Column schema, as reported by the dataset viewer (type, then observed range or class count):

url: stringlengths (58 to 61)
repository_url: stringclasses (1 value)
labels_url: stringlengths (72 to 75)
comments_url: stringlengths (67 to 70)
events_url: stringlengths (65 to 68)
html_url: stringlengths (46 to 51)
id: int64 (600M to 2.05B)
node_id: stringlengths (18 to 32)
number: int64 (2 to 6.51k)
title: stringlengths (1 to 290)
user: dict
labels: listlengths (0 to 4)
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: listlengths (0 to 4)
milestone: dict
comments: sequencelengths (0 to 30)
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: stringclasses (3 values)
active_lock_reason: float64
draft: float64 (0 to 1)
pull_request: dict
body: stringlengths (0 to 228k)
reactions: dict
timeline_url: stringlengths (67 to 70)
performed_via_github_app: float64
state_reason: stringclasses (3 values)
is_pull_request: bool (2 classes)
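The records below follow this schema, one field per line in the order listed above. A minimal sketch of loading and inspecting such a dump with the `datasets` library, assuming it is available locally as a JSON Lines file (the filename `issues.jsonl` is hypothetical):

```python
from datasets import load_dataset

# Load the issues dump from a local JSON Lines file (hypothetical filename).
issues = load_dataset("json", data_files="issues.jsonl", split="train")

# The inferred features should mirror the column listing above.
print(issues.features)

# Peek at the first record.
print(issues[0]["number"], issues[0]["title"], issues[0]["state"])
```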
https://api.github.com/repos/huggingface/datasets/issues/2827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2827/comments
https://api.github.com/repos/huggingface/datasets/issues/2827/events
https://github.com/huggingface/datasets/pull/2827
976,976,552
MDExOlB1bGxSZXF1ZXN0NzE3Nzg3MjEw
2,827
add a text classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adeepH", "id": 46108405, "login": "adeepH", "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "organizations_url": "https://api.github.com/users/adeepH/orgs", "received_events_url": "https://api.github.com/users/adeepH/received_events", "repos_url": "https://api.github.com/users/adeepH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "type": "User", "url": "https://api.github.com/users/adeepH" }
[]
closed
false
null
[]
null
[]
"2021-08-23T12:24:41Z"
"2021-08-23T15:51:18Z"
"2021-08-23T15:51:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2827.diff", "html_url": "https://github.com/huggingface/datasets/pull/2827", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2827.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2827" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2827/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3136/comments
https://api.github.com/repos/huggingface/datasets/issues/3136/events
https://github.com/huggingface/datasets/pull/3136
1,033,360,396
PR_kwDODunzps4tieFi
3,136
Fix script of Arabic Billion Words dataset to return all data
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-10-22T09:14:24Z"
"2021-10-22T13:28:41Z"
"2021-10-22T13:28:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3136.diff", "html_url": "https://github.com/huggingface/datasets/pull/3136", "merged_at": "2021-10-22T13:28:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/3136.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3136" }
The script has a bug and only parses and generates a portion of the entire dataset. This PR fixes the loading script so that is properly parses the entire dataset. Current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except for one: - For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027) | | Number of examples | Number of examples according to the source | |:---------------|-------------------:|-----:| | Alittihad | 349342 |349342 | | Almasryalyoum | 291723 |291723 | | Almustaqbal | 446873 |446873 | | Alqabas | 817274 |817274 | | Echoroukonline | 139732 |139732 | | Ryiadh | 858188 | 858188 | | Sabanews | 92149 |92149 | | SaudiYoum | 888068 |888068 | | Techreen | 314597 |314597 | | Youm7 | 1172136 |1025027 | Fix #3126.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3136/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4034/comments
https://api.github.com/repos/huggingface/datasets/issues/4034/events
https://github.com/huggingface/datasets/pull/4034
1,183,033,285
PR_kwDODunzps41IpN1
4,034
Fix null checksum in xcopa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2022-03-28T07:48:14Z"
"2022-03-28T08:06:14Z"
"2022-03-28T08:06:14Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4034.diff", "html_url": "https://github.com/huggingface/datasets/pull/4034", "merged_at": "2022-03-28T08:06:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4034.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4034" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4034/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4034/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5827/comments
https://api.github.com/repos/huggingface/datasets/issues/5827/events
https://github.com/huggingface/datasets/issues/5827
1,698,891,246
I_kwDODunzps5lQwXu
5,827
load json dataset interrupt when dtype cast problem occured
{ "avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4", "events_url": "https://api.github.com/users/1014661165/events{/privacy}", "followers_url": "https://api.github.com/users/1014661165/followers", "following_url": "https://api.github.com/users/1014661165/following{/other_user}", "gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/1014661165", "id": 46060451, "login": "1014661165", "node_id": "MDQ6VXNlcjQ2MDYwNDUx", "organizations_url": "https://api.github.com/users/1014661165/orgs", "received_events_url": "https://api.github.com/users/1014661165/received_events", "repos_url": "https://api.github.com/users/1014661165/repos", "site_admin": false, "starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/1014661165/subscriptions", "type": "User", "url": "https://api.github.com/users/1014661165" }
[]
open
false
null
[]
null
[ "Indeed the JSON dataset builder raises an error when it encounters an unexpected type.\r\n\r\nThere's an old PR open to add away to ignore such elements though, if it can help: https://github.com/huggingface/datasets/pull/2838" ]
"2023-05-07T04:52:09Z"
"2023-05-10T12:32:28Z"
null
NONE
null
null
null
### Describe the bug i have a json like this: [ {"id": 1, "name": 1}, {"id": 2, "name": "Nan"}, {"id": 3, "name": 3}, .... ] ,which have several problematic rows data like row 2, then i load it with datasets.load_dataset('json', data_files=['xx.json'], split='train'), it will report like this: Generating train split: 0 examples [00:00, ? examples/s]Failed to read file 'C:\Users\gawinjunwu\Downloads\test\data\a.json' with error <class 'pyarrow.lib.ArrowInvalid'>: Could not convert '2' with type str: tried to convert to int64 Traceback (most recent call last): File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1858, in _prepare_split_single for _, table in generator: File "D:\Python3.9\lib\site-packages\datasets\packaged_modules\json\json.py", line 146, in _generate_tables raise ValueError(f"Not able to read records in the JSON file at {file}.") from None ValueError: Not able to read records in the JSON file at C:\Users\gawinjunwu\Downloads\test\data\a.json. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\gawinjunwu\Downloads\test\scripts\a.py", line 4, in <module> ds = load_dataset('json', data_dir='data', split='train') File "D:\Python3.9\lib\site-packages\datasets\load.py", line 1797, in load_dataset builder_instance.download_and_prepare( File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare self._download_and_prepare( File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1746, in _prepare_split for job_id, done, content in self._prepare_split_single( File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1891, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset. Could datasets skip those problematic data row? ### Steps to reproduce the bug prepare a json file like this: [ {"id": 1, "name": 1}, {"id": 2, "name": "Nan"}, {"id": 3, "name": 3} ] then use datasets.load_dataset('json', dir_files=['xxx.json']) to load the json file ### Expected behavior skip the problematic data row and load row1 and row3 ### Environment info python3.9
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5827/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5827/timeline
null
null
false
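The failure reported in the record above happens because pyarrow infers `int64` for the `name` column from the first rows and then hits a string. A minimal workaround sketch, assuming the file is small enough to pre-clean in memory (file names are taken from the report; coercing the field to `str` is one possible choice, not the library's own behavior):

```python
import json

from datasets import load_dataset

# Pre-clean: coerce the mixed-type "name" field to a single type (str)
# so that pyarrow can infer one consistent dtype for the column.
with open("xx.json", encoding="utf-8") as f:
    rows = json.load(f)
for row in rows:
    row["name"] = str(row["name"])
with open("xx_clean.json", "w", encoding="utf-8") as f:
    json.dump(rows, f)

# Loading the cleaned file no longer triggers the ArrowInvalid error.
ds = load_dataset("json", data_files="xx_clean.json", split="train")
```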
https://api.github.com/repos/huggingface/datasets/issues/5178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5178/comments
https://api.github.com/repos/huggingface/datasets/issues/5178/events
https://github.com/huggingface/datasets/issues/5178
1,430,800,810
I_kwDODunzps5VSEmq
5,178
Unable to download the Chinese `wikipedia`, the dumpstatus.json not found!
{ "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/beyondguo", "id": 37113676, "login": "beyondguo", "node_id": "MDQ6VXNlcjM3MTEzNjc2", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "repos_url": "https://api.github.com/users/beyondguo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "type": "User", "url": "https://api.github.com/users/beyondguo" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "In the dumps page of the wiki (https://dumps.wikimedia.org/zhwiki/), I found the following dumps:\r\n```\r\nIndex of /zhwiki/\r\n[../](https://dumps.wikimedia.org/)\r\n[20220701/](https://dumps.wikimedia.org/zhwiki/20220701/) 21-Aug-2022 01:48 -\r\n[20220720/](https://dumps.wikimedia.org/zhwiki/20220720/) 02-Sep-2022 01:48 -\r\n[20220801/](https://dumps.wikimedia.org/zhwiki/20220801/) 21-Sep-2022 01:44 -\r\n[20220820/](https://dumps.wikimedia.org/zhwiki/20220820/) 01-Oct-2022 09:39 -\r\n[20220901/](https://dumps.wikimedia.org/zhwiki/20220901/) 20-Oct-2022 09:44 -\r\n[20220920/](https://dumps.wikimedia.org/zhwiki/20220920/) 23-Sep-2022 12:06 -\r\n[20221001/](https://dumps.wikimedia.org/zhwiki/20221001/) 04-Oct-2022 15:10 -\r\n[20221020/](https://dumps.wikimedia.org/zhwiki/20221020/) 01-Nov-2022 03:15 -\r\n[latest/](https://dumps.wikimedia.org/zhwiki/latest/) 01-Nov-2022 03:15 -\r\n```\r\n\r\nMaybe the older dumps are not available which caused the downloading failure? \r\n\r\nHowever, when I changed to the newer version:\r\n```\r\ndata = load_dataset('wikipedia', '20220701.zh', beam_runner='DirectRunner')\r\n```\r\n\r\nit shows:\r\n```\r\nValueError: BuilderConfig 20220701.zh not found. Available: ['20220301.aa', '20220301.ab', '20220301.ace', '20220301.ady', '20220301.af', '20220301.ak', '20220301.als', '20220301.am', '20220301.an', '20220301.ang', '20220301.ar', '20220301.arc', '20220301.arz', '20220301.as', '20220301.ast', '20220301.atj', '20220301.av', '20220301.ay', '20220301.az', '20220301.azb', '20220301.ba', '20220301.bar', '20220301.bat-smg', '20220301.bcl', '20220301.be', '20220301.be-x-old', '20220301.bg', '20220301.bh', '20220301.bi', '20220301.bjn', '20220301.bm', '20220301.bn', '20220301.bo', '20220301.bpy', '20220301.br', '20220301.bs', '20220301.bug', '20220301.bxr', '20220301.ca', '20220301.cbk-zam', '20220301.cdo', '20220301.ce', '20220301.ceb', '20220301.ch', '20220301.cho', '20220301.chr', '20220301.chy', '20220301.ckb', '20220301.co', '20220301.cr', '20220301.crh', '20220301.cs', '20220301.csb', '20220301.cu', '20220301.cv', '20220301.cy', '20220301.da', '20220301.de', '20220301.din', '20220301.diq', '20220301.dsb', '20220301.dty', '20220301.dv', '20220301.dz', '20220301.ee', '20220301.el', '20220301.eml', '20220301.en', '20220301.eo', '20220301.es', '20220301.et', '20220301.eu', '20220301.ext', '20220301.fa', '20220301.ff', '20220301.fi', '20220301.fiu-vro', '20220301.fj', '20220301.fo', '20220301.fr', '20220301.frp', '20220301.frr', '20220301.fur', '20220301.fy', '20220301.ga', '20220301.gag', '20220301.gan', '20220301.gd', '20220301.gl', '20220301.glk', '20220301.gn', '20220301.gom', '20220301.gor', '20220301.got', '20220301.gu', '20220301.gv', '20220301.ha', '20220301.hak', '20220301.haw', '20220301.he', '20220301.hi', '20220301.hif', '20220301.ho', '20220301.hr', '20220301.hsb', '20220301.ht', '20220301.hu', '20220301.hy', '20220301.ia', '20220301.id', '20220301.ie', '20220301.ig', '20220301.ii', '20220301.ik', '20220301.ilo', '20220301.inh', '20220301.io', '20220301.is', '20220301.it', '20220301.iu', '20220301.ja', '20220301.jam', '20220301.jbo', '20220301.jv', '20220301.ka', '20220301.kaa', '20220301.kab', '20220301.kbd', '20220301.kbp', '20220301.kg', '20220301.ki', '20220301.kj', '20220301.kk', '20220301.kl', '20220301.km', '20220301.kn', '20220301.ko', '20220301.koi', '20220301.krc', '20220301.ks', '20220301.ksh', '20220301.ku', '20220301.kv', '20220301.kw', '20220301.ky', '20220301.la', '20220301.lad', '20220301.lb', '20220301.lbe', '20220301.lez', 
'20220301.lfn', '20220301.lg', '20220301.li', '20220301.lij', '20220301.lmo', '20220301.ln', '20220301.lo', '20220301.lrc', '20220301.lt', '20220301.ltg', '20220301.lv', '20220301.mai', '20220301.map-bms', '20220301.mdf', '20220301.mg', '20220301.mh', '20220301.mhr', '20220301.mi', '20220301.min', '20220301.mk', '20220301.ml', '20220301.mn', '20220301.mr', '20220301.mrj', '20220301.ms', '20220301.mt', '20220301.mus', '20220301.mwl', '20220301.my', '20220301.myv', '20220301.mzn', '20220301.na', '20220301.nah', '20220301.nap', '20220301.nds', '20220301.nds-nl', '20220301.ne', '20220301.new', '20220301.ng', '20220301.nl', '20220301.nn', '20220301.no', '20220301.nov', '20220301.nrm', '20220301.nso', '20220301.nv', '20220301.ny', '20220301.oc', '20220301.olo', '20220301.om', '20220301.or', '20220301.os', '20220301.pa', '20220301.pag', '20220301.pam', '20220301.pap', '20220301.pcd', '20220301.pdc', '20220301.pfl', '20220301.pi', '20220301.pih', '20220301.pl', '20220301.pms', '20220301.pnb', '20220301.pnt', '20220301.ps', '20220301.pt', '20220301.qu', '20220301.rm', '20220301.rmy', '20220301.rn', '20220301.ro', '20220301.roa-rup', '20220301.roa-tara', '20220301.ru', '20220301.rue', '20220301.rw', '20220301.sa', '20220301.sah', '20220301.sat', '20220301.sc', '20220301.scn', '20220301.sco', '20220301.sd', '20220301.se', '20220301.sg', '20220301.sh', '20220301.si', '20220301.simple', '20220301.sk', '20220301.sl', '20220301.sm', '20220301.sn', '20220301.so', '20220301.sq', '20220301.sr', '20220301.srn', '20220301.ss', '20220301.st', '20220301.stq', '20220301.su', '20220301.sv', '20220301.sw', '20220301.szl', '20220301.ta', '20220301.tcy', '20220301.te', '20220301.tet', '20220301.tg', '20220301.th', '20220301.ti', '20220301.tk', '20220301.tl', '20220301.tn', '20220301.to', '20220301.tpi', '20220301.tr', '20220301.ts', '20220301.tt', '20220301.tum', '20220301.tw', '20220301.ty', '20220301.tyv', '20220301.udm', '20220301.ug', '20220301.uk', '20220301.ur', '20220301.uz', '20220301.ve', '20220301.vec', '20220301.vep', '20220301.vi', '20220301.vls', '20220301.vo', '20220301.wa', '20220301.war', '20220301.wo', '20220301.wuu', '20220301.xal', '20220301.xh', '20220301.xmf', '20220301.yi', '20220301.yo', '20220301.za', '20220301.zea', '20220301.zh', '20220301.zh-classical', '20220301.zh-min-nan', '20220301.zh-yue', '20220301.zu']\r\n```\r\n\r\nSo I guess adding the latest dumps versions to the `BuilderConfig` may solve the problem? But how to add it?", "Hi, @beyondguo, thanks for reporting.\r\n\r\nYou have all the information in the dataset card: https://huggingface.co/datasets/wikipedia\r\n\r\n> Then, you can load any subset of Wikipedia per language and per date this way:\r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> load_dataset(\"wikipedia\", language=\"sw\", date=\"20220120\", beam_runner=...) \r\n> ```\r\n> where you can pass as beam_runner any Apache Beam supported runner for (distributed) data processing (see [here](https://beam.apache.org/documentation/runners/capability-matrix/)). 
Pass \"DirectRunner\" to run it on your machine.\r\n> \r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nNote that you have to pass the language and date as keyword arguments, and the available dates depend on the language and can be found on Wikimedia website.", "Also:\r\n> Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n> ```python\r\n> load_dataset(\"wikipedia\", \"20220301.en\")\r\n> ```\r\n> The list of pre-processed subsets is:\r\n> - \"20220301.de\"\r\n> - \"20220301.en\"\r\n> - \"20220301.fr\"\r\n> - \"20220301.frr\"\r\n> - \"20220301.it\"\r\n> - \"20220301.simple\"" ]
"2022-11-01T03:17:55Z"
"2022-11-02T08:27:15Z"
"2022-11-02T08:24:29Z"
NONE
null
null
null
### Describe the bug I tried: `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` and `data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')` but both got: `FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json` the full report is: ``` FileNotFoundError Traceback (most recent call last) <ipython-input-13-d07c5021090c> in <module> 1 from datasets import load_dataset 2 ----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')<?, ?it/s] /opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1740 1741 # Download and prepare data -> 1742 builder_instance.download_and_prepare( 1743 download_config=download_config, 1744 download_mode=download_mode, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 812 **download_and_prepare_kwargs, 813 } --> 814 self._download_and_prepare( 815 dl_manager=dl_manager, 816 verify_infos=verify_infos, /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1645 options=beam_options, 1646 ) -> 1647 super()._download_and_prepare( 1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs 1649 ) # TODO handle verify_infos in beam datasets /opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 881 split_dict = SplitDict(dataset_name=self.name) 882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 884 885 # Checksums verification ~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline) 943 info_url = _base_url(lang) + _INFO_FILE 944 # Use dictionary since testing mock always returns the same result. --> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url}) 946 947 xml_urls = [] /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls) 431 extracted_path(s): `str`, extracted paths of given URL(s). 
432 """ --> 433 return self.extract(self.download(url_or_urls)) 434 435 def get_recorded_sizes_checksums(self): /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls) 308 309 start_time = datetime.now() --> 310 downloaded_path_or_paths = map_nested( 311 download_func, 312 url_or_urls, /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc) 427 num_proc = 1 428 if num_proc <= 1 or len(iterable) < parallel_min_length: --> 429 mapped = [ 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 428 if num_proc <= 1 or len(iterable) < parallel_min_length: 429 mapped = [ --> 430 _single_map_nested((function, obj, types, None, True, None)) 431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 432 ] /opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 329 # Singleton first to spare some computation 330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 331 return function(data_struct) 332 333 # Reduce logging to keep things readable in multiprocessing with tqdm /opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config) 335 # append the relative path to the base_path 336 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 337 return cached_path(url_or_filename, download_config=download_config) 338 339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]): /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 186 if is_remote_url(url_or_filename): 187 # URL, so get it from the cache (downloading if necessary) --> 188 output_path = get_from_cache( 189 url_or_filename, 190 cache_dir=cache_dir, /opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 533 ) 534 elif response is not None and response.status_code == 404: --> 535 raise FileNotFoundError(f"Couldn't find file at {url}") 536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 537 if head_error is not None: FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json ``` ### Steps to reproduce the bug `data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')` ### Expected behavior download the data ### Environment info python3.6 latest datasets/transformers version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5178/timeline
null
completed
false
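Condensing the resolution from the comments above into a runnable sketch (the calls are quoted from the dataset card; the `20221020` date comes from the dump index listed in the first comment and, like all non-latest dumps, may have been removed since):

```python
from datasets import load_dataset

# Pre-processed subset hosted by Hugging Face (from the dataset card):
en = load_dataset("wikipedia", "20220301.en")

# Any other language/date pair is built on the fly from the Wikimedia dumps,
# so the date must still exist at https://dumps.wikimedia.org/zhwiki/.
zh = load_dataset("wikipedia", language="zh", date="20221020", beam_runner="DirectRunner")
```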
https://api.github.com/repos/huggingface/datasets/issues/924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/924/comments
https://api.github.com/repos/huggingface/datasets/issues/924/events
https://github.com/huggingface/datasets/pull/924
753,631,951
MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw
924
Add DART
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "LGTM!" ]
"2020-11-30T16:42:37Z"
"2020-12-02T03:13:42Z"
"2020-12-02T03:13:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/924.diff", "html_url": "https://github.com/huggingface/datasets/pull/924", "merged_at": "2020-12-02T03:13:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/924.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/924" }
- **Name:** *DART* - **Description:** *DART is a large dataset for open-domain structured data record to text generation.* - **Paper:** *https://arxiv.org/abs/2007.02871* - **Data:** *https://github.com/Yale-LILY/dart#leaderboard* ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/924/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/924/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1874/comments
https://api.github.com/repos/huggingface/datasets/issues/1874/events
https://github.com/huggingface/datasets/pull/1874
807,786,094
MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
1,874
Adding Europarl Bilingual dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello" }
[]
closed
false
null
[]
null
[ "is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.", "I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos", "I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.", "Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. 
All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help", "I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!", "Is there something else I should do? If not can this be integrated?", "Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`" ]
"2021-02-13T17:02:04Z"
"2021-03-04T10:38:22Z"
"2021-03-04T10:38:22Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "html_url": "https://github.com/huggingface/datasets/pull/1874", "merged_at": "2021-03-04T10:38:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874" }
Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows to use every language pair detailed in the original dataset. The loading script manages also the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some keys that references to inexistent sentences). I chose to follow the the style of a similar dataset available in this repository: `multi_para_crawl`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
null
null
true
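The maintainer's final comment above shows the on-the-fly configuration mechanism this PR adopted. A sketch of it (the `translation` field name is an assumption based on other OPUS-style datasets, not confirmed by the thread):

```python
from datasets import load_dataset

# Pairs outside BUILDER_CONFIGS are created on the fly from lang1/lang2.
ds = load_dataset("europarl_bilingual", lang1="fi", lang2="fr", split="train")

# Assumed OPUS-style schema: each example holds a {"fi": ..., "fr": ...} dict.
print(ds[0]["translation"])
```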
https://api.github.com/repos/huggingface/datasets/issues/2128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2128/comments
https://api.github.com/repos/huggingface/datasets/issues/2128/events
https://github.com/huggingface/datasets/issues/2128
843,023,910
MDU6SXNzdWU4NDMwMjM5MTA=
2,128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4", "events_url": "https://api.github.com/users/adamlin120/events{/privacy}", "followers_url": "https://api.github.com/users/adamlin120/followers", "following_url": "https://api.github.com/users/adamlin120/following{/other_user}", "gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adamlin120", "id": 31605305, "login": "adamlin120", "node_id": "MDQ6VXNlcjMxNjA1MzA1", "organizations_url": "https://api.github.com/users/adamlin120/orgs", "received_events_url": "https://api.github.com/users/adamlin120/received_events", "repos_url": "https://api.github.com/users/adamlin120/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions", "type": "User", "url": "https://api.github.com/users/adamlin120" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) " ]
"2021-03-29T06:34:02Z"
"2021-03-31T12:48:01Z"
"2021-03-31T12:48:01Z"
CONTRIBUTOR
null
null
null
Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2128/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1243/comments
https://api.github.com/repos/huggingface/datasets/issues/1243/events
https://github.com/huggingface/datasets/pull/1243
758,378,904
MDExOlB1bGxSZXF1ZXN0NTMzNTYxNDAx
1,243
Add Google Noun Verb Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
"2020-12-07T10:26:05Z"
"2023-09-24T09:40:54Z"
"2022-10-03T09:39:37Z"
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/1243.diff", "html_url": "https://github.com/huggingface/datasets/pull/1243", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1243.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1243" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1243/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1243/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1877/comments
https://api.github.com/repos/huggingface/datasets/issues/1877/events
https://github.com/huggingface/datasets/issues/1877
808,462,272
MDU6SXNzdWU4MDg0NjIyNzI=
1,877
Allow concatenation of both in-memory and on-disk datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).", "Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan", "Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle/unpickle concatenated datasets", "Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?", "Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`?", "> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https://github.com/huggingface/datasets/issues/1949\r\nHowever I don't think this will affect map." ]
"2021-02-15T11:39:46Z"
"2021-03-26T16:51:58Z"
"2021-03-26T16:51:58Z"
MEMBER
null
null
null
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pickled/unpickled in-memory - on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogenous sources like in-memory tables or on-disk tables ? This could also be further extended in the future One idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table. Then the dataset would be the concatenation of all these tables. Depending on the source type, the serialization using pickle would be different. In-memory data would be copied while on-disk data would simply be replaced by the path to these data. If you have some ideas you would like to share about the design/API feel free to do so :) cc @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1877/timeline
null
completed
false
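Once the block-separation design described in the issue above landed, mixing the two kinds of datasets became transparent to the user. A minimal sketch, assuming both datasets share the same features and that `path/to/saved_dataset` was produced earlier with `save_to_disk` (the path is hypothetical):

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

# In-memory dataset built directly from a Python dict.
in_memory = Dataset.from_dict({"text": ["a", "b"]})

# Memory-mapped dataset reloaded from disk (hypothetical path).
on_disk = load_from_disk("path/to/saved_dataset")

# The concatenation keeps track of which blocks live in memory and which
# are memory mapped, so pickling/unpickling handles each block correctly.
combined = concatenate_datasets([in_memory, on_disk])
```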
https://api.github.com/repos/huggingface/datasets/issues/1552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1552/comments
https://api.github.com/repos/huggingface/datasets/issues/1552/events
https://github.com/huggingface/datasets/pull/1552
765,664,411
MDExOlB1bGxSZXF1ZXN0NTM5MDI2MzAx
1,552
Added OPUS ParaCrawl
{ "avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4", "events_url": "https://api.github.com/users/rkc007/events{/privacy}", "followers_url": "https://api.github.com/users/rkc007/followers", "following_url": "https://api.github.com/users/rkc007/following{/other_user}", "gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rkc007", "id": 22396042, "login": "rkc007", "node_id": "MDQ6VXNlcjIyMzk2MDQy", "organizations_url": "https://api.github.com/users/rkc007/orgs", "received_events_url": "https://api.github.com/users/rkc007/received_events", "repos_url": "https://api.github.com/users/rkc007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkc007/subscriptions", "type": "User", "url": "https://api.github.com/users/rkc007" }
[]
closed
false
null
[]
null
[ "@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.", "@rkc007 @lhoestq just noticed a dataset named para_crawl has been added a long time ago: #91.", "They're not exactly the same so it's ok to have both.\r\n\r\nEspecially the `para_crawl` that already exists only uses the text from the ParaCrawl release 4.", "Could you regenerate the dataset_infos.json @rkc007 please ?\r\nIt looks like it has some issues due to the dataset class name change", "@SBrandeis Thank you for suggesting changes. I made the changes you suggested. \r\n\r\n@lhoestq I generated `dataset_infos.json` again. I ran both tests(Dummy & Real data) and it passed. Can you please review it again?", "merging since the CI is fixed on master" ]
"2020-12-13T21:44:29Z"
"2020-12-21T09:50:26Z"
"2020-12-21T09:50:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1552.diff", "html_url": "https://github.com/huggingface/datasets/pull/1552", "merged_at": "2020-12-21T09:50:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/1552.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1552" }
Dataset : http://opus.nlpl.eu/ParaCrawl.php
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1552/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1552/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4487/comments
https://api.github.com/repos/huggingface/datasets/issues/4487/events
https://github.com/huggingface/datasets/pull/4487
1,270,525,163
PR_kwDODunzps45nm5J
4,487
Support streaming UDHR dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-14T09:33:33Z"
"2022-06-15T05:09:22Z"
"2022-06-15T04:59:49Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4487.diff", "html_url": "https://github.com/huggingface/datasets/pull/4487", "merged_at": "2022-06-15T04:59:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/4487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4487" }
This PR: - Adds support for streaming the UDHR dataset - Adds the BCP 47 language code as a feature
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4487/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1089/comments
https://api.github.com/repos/huggingface/datasets/issues/1089/events
https://github.com/huggingface/datasets/pull/1089
756,823,690
MDExOlB1bGxSZXF1ZXN0NTMyMzA0MDM2
1,089
add sharc_modified
{ "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patil-suraj", "id": 27137566, "login": "patil-suraj", "node_id": "MDQ6VXNlcjI3MTM3NTY2", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "repos_url": "https://api.github.com/users/patil-suraj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "type": "User", "url": "https://api.github.com/users/patil-suraj" }
[]
closed
false
null
[]
null
[]
"2020-12-04T05:49:49Z"
"2020-12-04T10:41:30Z"
"2020-12-04T10:31:44Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1089.diff", "html_url": "https://github.com/huggingface/datasets/pull/1089", "merged_at": "2020-12-04T10:31:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/1089.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1089" }
Adding modified ShARC dataset https://github.com/nikhilweee/neural-conv-qa
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1089/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1089/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/365/comments
https://api.github.com/repos/huggingface/datasets/issues/365/events
https://github.com/huggingface/datasets/issues/365
653,845,964
MDU6SXNzdWU2NTM4NDU5NjQ=
365
How to augment data ?
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul" }
[]
closed
false
null
[]
null
[ "Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?", "Some samples in the dataset are too long, I want to divide them in several samples.", "Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.", "It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...", "Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ", "As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches." ]
"2020-07-09T07:52:37Z"
"2020-07-10T09:12:07Z"
"2020-07-10T08:22:15Z"
NONE
null
null
null
Is there any clean way to augment data? For now my work-around is to use batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=True) ```
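The batched `map` above duplicates every row; the same mechanism also covers the splitting use case raised in the comments, because a batched function may return more rows than it receives. Here is a minimal, self-contained sketch of that idea; the `split_long_samples` function and the 100-character limit are illustrative assumptions, not code from the original thread:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["short", "x" * 250]})

def split_long_samples(batch, max_len=100):
    # Emit one output row per chunk, so the batched map
    # can return more rows than it received.
    out = {"text": []}
    for text in batch["text"]:
        chunks = [text[i : i + max_len] for i in range(0, len(text), max_len)]
        out["text"].extend(chunks or [text])
    return out

dataset = dataset.map(split_long_samples, batched=True)
print(dataset.num_rows)  # 4: "short" plus three chunks of the long sample
```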
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/365/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3002/comments
https://api.github.com/repos/huggingface/datasets/issues/3002/events
https://github.com/huggingface/datasets/pull/3002
1,014,120,524
PR_kwDODunzps4smCNO
3,002
Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "@lhoestq The test passes even without the try/except block!", "Hey, I'm a little late because I was caught up in the course work, but I double-checked this and it looks great. Thanks for fixing!" ]
"2021-10-02T17:44:09Z"
"2021-10-13T11:48:00Z"
"2021-10-13T09:03:23Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3002.diff", "html_url": "https://github.com/huggingface/datasets/pull/3002", "merged_at": "2021-10-13T09:03:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3002.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3002" }
This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally: * renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency * correctly indents `TensorflowDatasetMixin`'s docstring * replaces `tf.data.AUTOTUNE` with `tf.data.experimental.AUTOTUNE` (we support TF>=2.2 according to the [setup.py](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/setup.py#L188) and `AUTOTUNE` has been moved to the experimental part of `tf.data` in 1.X if I'm not mistaken) Fixes #2934
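For readers unfamiliar with the `weakref` callback pattern this PR description mentions, here is a generic, self-contained sketch of the idea (hypothetical class and function names; this is not the actual `datasets` patch): a finalizer registered on the derived object releases the resource once that object is garbage-collected.

```python
import weakref

class ArrowFileHandle:
    """Stand-in for an open Arrow file (illustrative only)."""
    def __init__(self, path):
        self.path = path
        self.open = True

    def close(self):
        self.open = False

class TFDatasetStandIn:
    """Stand-in for the tf.data.Dataset built from the file."""

def make_dataset(handle):
    dataset = TFDatasetStandIn()
    # When `dataset` is garbage-collected, the finalizer closes the
    # file, removing the lingering reference described in the issue.
    weakref.finalize(dataset, handle.close)
    return dataset

handle = ArrowFileHandle("data.arrow")
ds = make_dataset(handle)
del ds  # in CPython this triggers the finalizer immediately
print(handle.open)  # False
```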
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3002/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3002/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6491/comments
https://api.github.com/repos/huggingface/datasets/issues/6491/events
https://github.com/huggingface/datasets/pull/6491
2,037,690,643
PR_kwDODunzps5hyiTY
6,491
Fix metrics dead link
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
"2023-12-12T12:51:49Z"
"2023-12-12T12:58:25Z"
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6491.diff", "html_url": "https://github.com/huggingface/datasets/pull/6491", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6491.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6491" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6491/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/174/comments
https://api.github.com/repos/huggingface/datasets/issues/174/events
https://github.com/huggingface/datasets/issues/174
621,928,403
MDU6SXNzdWU2MjE5Mjg0MDM=
174
nlp.load_dataset('xsum') -> TypeError
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
[]
closed
false
null
[]
null
[]
"2020-05-20T16:59:09Z"
"2020-05-20T17:43:46Z"
"2020-05-20T17:43:46Z"
CONTRIBUTOR
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/174/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/174/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6095/comments
https://api.github.com/repos/huggingface/datasets/issues/6095/events
https://github.com/huggingface/datasets/pull/6095
1,826,496,967
PR_kwDODunzps5WqJtr
6,095
Fix deprecation of errors in TextConfig
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012497 / 0.011353 (0.001144) | 0.005355 / 0.011008 (-0.005654) | 0.106018 / 0.038508 (0.067510) | 0.093069 / 0.023109 (0.069960) | 0.394699 / 0.275898 (0.118801) | 0.449723 / 0.323480 (0.126243) | 0.006434 / 0.007986 (-0.001552) | 0.004187 / 0.004328 (-0.000141) | 0.079620 / 0.004250 (0.075370) | 0.062513 / 0.037052 (0.025460) | 0.410305 / 0.258489 (0.151816) | 0.467231 / 0.293841 (0.173390) | 0.048130 / 0.128546 (-0.080416) | 0.013747 / 0.075646 (-0.061899) | 0.357979 / 0.419271 (-0.061293) | 0.064764 / 0.043533 (0.021231) | 0.411029 / 0.255139 (0.155890) | 0.454734 / 0.283200 (0.171534) | 0.037215 / 0.141683 (-0.104468) | 1.801331 / 1.452155 (0.349176) | 1.951628 / 1.492716 (0.458912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231073 / 0.018006 (0.213067) | 0.564179 / 0.000490 (0.563689) | 0.000947 / 0.000200 (0.000747) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030629 / 0.037411 (-0.006783) | 0.092522 / 0.014526 (0.077996) | 0.109781 / 0.176557 (-0.066775) | 0.183185 / 0.737135 (-0.553950) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600095 / 0.215209 (0.384886) | 6.072868 / 2.077655 (3.995213) | 2.684109 
/ 1.504120 (1.179989) | 2.436204 / 1.541195 (0.895010) | 2.514667 / 1.468490 (1.046177) | 0.865455 / 4.584777 (-3.719322) | 5.245561 / 3.745712 (1.499849) | 5.628688 / 5.269862 (0.358826) | 3.457343 / 4.565676 (-1.108333) | 0.107563 / 0.424275 (-0.316712) | 0.008803 / 0.007607 (0.001196) | 0.754014 / 0.226044 (0.527970) | 7.341226 / 2.268929 (5.072297) | 3.482090 / 55.444624 (-51.962534) | 2.726071 / 6.876477 (-4.150406) | 3.168494 / 2.142072 (1.026422) | 1.023517 / 4.805227 (-3.781710) | 0.207440 / 6.500664 (-6.293224) | 0.073642 / 0.075469 (-0.001827) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.588636 / 1.841788 (-0.253152) | 23.305257 / 8.074308 (15.230949) | 22.071476 / 10.191392 (11.880084) | 0.242044 / 0.680424 (-0.438379) | 0.028830 / 0.534201 (-0.505371) | 0.461414 / 0.579283 (-0.117869) | 0.591024 / 0.434364 (0.156660) | 0.548984 / 0.540337 (0.008646) | 0.783318 / 1.386936 (-0.603618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008724 / 0.011353 (-0.002629) | 0.004638 / 0.011008 (-0.006371) | 0.081024 / 0.038508 (0.042516) | 0.077533 / 0.023109 (0.054423) | 0.444827 / 0.275898 (0.168929) | 0.507812 / 0.323480 (0.184332) | 0.006017 / 0.007986 (-0.001968) | 0.004204 / 0.004328 (-0.000124) | 0.082154 / 0.004250 (0.077904) | 0.063818 / 0.037052 (0.026765) | 0.463468 / 0.258489 (0.204979) | 0.536784 / 0.293841 (0.242943) | 0.046393 / 0.128546 (-0.082153) | 0.014349 / 0.075646 (-0.061298) | 0.089213 / 0.419271 (-0.330059) | 0.058313 / 0.043533 (0.014780) | 0.463674 / 0.255139 (0.208535) | 0.495865 / 0.283200 (0.212665) | 0.036586 / 0.141683 (-0.105096) | 1.801601 / 1.452155 (0.349447) | 1.871219 / 1.492716 (0.378502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273411 / 0.018006 (0.255405) | 0.531745 / 0.000490 (0.531255) | 0.000424 / 0.000200 (0.000224) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037689 / 0.037411 (0.000278) | 0.109544 / 0.014526 (0.095019) | 0.124053 / 0.176557 (-0.052504) | 0.179960 / 0.737135 (-0.557175) | 0.118218 / 0.296338 (-0.178120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639859 / 0.215209 (0.424650) | 6.347385 / 2.077655 (4.269730) | 2.910188 / 1.504120 (1.406068) | 2.698821 / 1.541195 (1.157626) | 2.802652 / 1.468490 (1.334161) | 0.816109 / 4.584777 (-3.768668) | 5.190313 / 3.745712 (1.444601) | 4.642684 / 5.269862 (-0.627178) | 2.948092 / 4.565676 (-1.617584) | 0.095877 / 0.424275 (-0.328398) | 0.009631 / 0.007607 (0.002024) | 0.779136 / 0.226044 (0.553091) | 7.611586 / 2.268929 (5.342658) | 3.760804 / 55.444624 (-51.683820) | 3.139355 / 6.876477 (-3.737122) | 3.419660 / 2.142072 (1.277587) | 1.036397 / 4.805227 (-3.768831) | 0.224015 / 6.500664 (-6.276649) | 0.084037 / 0.075469 (0.008568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.710608 / 1.841788 (-0.131179) | 24.447646 / 8.074308 (16.373338) | 21.345322 / 10.191392 (11.153930) | 0.232383 / 0.680424 (-0.448040) | 0.026381 / 0.534201 (-0.507820) | 0.475995 / 0.579283 (-0.103289) | 0.611939 / 0.434364 (0.177575) | 0.541441 / 0.540337 (0.001104) | 0.742796 / 1.386936 (-0.644140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7929929525e734f7232cfc68d1d22fb8d53c54a3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006140 / 0.011353 (-0.005213) | 0.003664 / 0.011008 (-0.007344) | 0.080765 / 0.038508 (0.042257) | 0.065009 / 0.023109 (0.041900) | 0.312787 / 0.275898 (0.036889) | 0.354637 / 0.323480 (0.031157) | 0.004846 / 0.007986 (-0.003140) | 0.003019 / 0.004328 (-0.001310) | 0.062823 / 0.004250 (0.058573) | 0.050446 / 0.037052 (0.013394) | 0.314478 / 0.258489 (0.055989) | 0.360206 / 0.293841 (0.066365) | 0.027282 / 0.128546 (-0.101265) | 0.008024 / 0.075646 (-0.067622) | 0.262125 / 0.419271 (-0.157146) | 0.045793 / 0.043533 (0.002260) | 0.310508 / 0.255139 (0.055369) | 0.340899 / 0.283200 (0.057699) | 0.021850 / 0.141683 (-0.119833) | 1.510791 / 1.452155 (0.058636) | 1.570661 / 1.492716 (0.077944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.449310 / 0.000490 (0.448820) | 0.004556 / 0.000200 (0.004356) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023689 / 0.037411 (-0.013722) | 0.076316 / 0.014526 (0.061791) | 0.084800 / 0.176557 (-0.091757) | 0.153154 / 0.737135 (-0.583981) | 0.086467 / 0.296338 (-0.209871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432254 / 0.215209 (0.217045) | 4.305098 / 2.077655 (2.227443) | 2.304267 / 1.504120 (0.800147) | 2.139503 / 1.541195 (0.598309) | 2.220414 / 1.468490 (0.751924) | 0.498595 / 4.584777 (-4.086182) | 3.058593 / 3.745712 (-0.687119) | 4.324501 / 5.269862 (-0.945361) | 2.667731 / 4.565676 (-1.897946) | 0.059917 / 0.424275 (-0.364358) | 0.006829 / 0.007607 (-0.000778) | 0.504608 / 0.226044 (0.278564) | 5.044480 / 2.268929 (2.775552) | 2.753080 / 55.444624 (-52.691545) | 2.449265 / 6.876477 (-4.427212) | 2.635113 / 2.142072 (0.493040) | 0.590760 / 4.805227 (-4.214467) | 0.130133 / 6.500664 (-6.370532) | 0.062759 / 0.075469 (-0.012710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267014 / 1.841788 (-0.574773) | 18.562890 / 8.074308 (10.488581) | 13.991257 / 10.191392 (3.799865) | 0.147108 / 0.680424 (-0.533315) | 0.017216 / 0.534201 (-0.516985) | 0.330317 / 0.579283 (-0.248966) | 0.351328 / 0.434364 (-0.083036) | 0.381097 / 0.540337 
(-0.159241) | 0.558718 / 1.386936 (-0.828218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006385 / 0.011353 (-0.004967) | 0.003668 / 0.011008 (-0.007340) | 0.062581 / 0.038508 (0.024073) | 0.067006 / 0.023109 (0.043896) | 0.428465 / 0.275898 (0.152567) | 0.466106 / 0.323480 (0.142626) | 0.005806 / 0.007986 (-0.002180) | 0.003117 / 0.004328 (-0.001212) | 0.063554 / 0.004250 (0.059303) | 0.054404 / 0.037052 (0.017352) | 0.431168 / 0.258489 (0.172679) | 0.467578 / 0.293841 (0.173737) | 0.027779 / 0.128546 (-0.100767) | 0.008055 / 0.075646 (-0.067592) | 0.067718 / 0.419271 (-0.351554) | 0.043042 / 0.043533 (-0.000491) | 0.425926 / 0.255139 (0.170787) | 0.453699 / 0.283200 (0.170500) | 0.023495 / 0.141683 (-0.118187) | 1.435356 / 1.452155 (-0.016799) | 1.509340 / 1.492716 (0.016624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242322 / 0.018006 (0.224316) | 0.446865 / 0.000490 (0.446376) | 0.001079 / 0.000200 (0.000879) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025376 / 0.037411 (-0.012035) | 0.079373 / 0.014526 (0.064847) | 0.088554 / 0.176557 (-0.088002) | 0.141026 / 0.737135 (-0.596109) | 0.090666 / 0.296338 (-0.205672) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434800 / 0.215209 (0.219590) | 4.314491 / 2.077655 (2.236836) | 2.320688 / 1.504120 (0.816568) | 2.163941 / 1.541195 (0.622747) | 
2.292576 / 1.468490 (0.824086) | 0.500226 / 4.584777 (-4.084551) | 3.114604 / 3.745712 (-0.631108) | 4.206997 / 5.269862 (-1.062864) | 2.461126 / 4.565676 (-2.104551) | 0.057717 / 0.424275 (-0.366558) | 0.006989 / 0.007607 (-0.000618) | 0.515623 / 0.226044 (0.289579) | 5.155301 / 2.268929 (2.886372) | 2.733589 / 55.444624 (-52.711035) | 2.542111 / 6.876477 (-4.334366) | 2.697035 / 2.142072 (0.554963) | 0.594213 / 4.805227 (-4.211014) | 0.128537 / 6.500664 (-6.372127) | 0.065223 / 0.075469 (-0.010246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306738 / 1.841788 (-0.535050) | 19.065370 / 8.074308 (10.991062) | 14.242096 / 10.191392 (4.050704) | 0.146177 / 0.680424 (-0.534246) | 0.017186 / 0.534201 (-0.517015) | 0.337224 / 0.579283 (-0.242059) | 0.349997 / 0.434364 (-0.084367) | 0.390408 / 0.540337 (-0.149930) | 0.524597 / 1.386936 (-0.862339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69ec36948b0ef1f194e9dcd43ec53a50b7708962 \"CML watermark\")\n" ]
"2023-07-28T14:08:37Z"
"2023-07-31T05:26:32Z"
"2023-07-31T05:17:38Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6095.diff", "html_url": "https://github.com/huggingface/datasets/pull/6095", "merged_at": "2023-07-31T05:17:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6095.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6095" }
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-701c27131a5d> in <module> ----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict") ~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, ~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1831 # Instantiate the dataset builder -> 1832 builder_instance: DatasetBuilder = builder_cls( 1833 cache_dir=cache_dir, 1834 dataset_name=dataset_name, ~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs) 371 if data_dir is not None: 372 config_kwargs["data_dir"] = data_dir --> 373 self.config, self.config_id = self._create_builder_config( 374 config_name=config_name, 375 custom_features=features, ~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs) 550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 551 config_kwargs["version"] = self.VERSION --> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 553 554 # otherwise use the config_kwargs to overwrite the attributes TypeError: __init__() got an unexpected keyword argument 'errors' ``` Similar to: - #6094
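As a hedged sketch of the kind of fix this PR implies (not the actual patch), a builder config that deprecates a keyword has to keep accepting it so that forwarding `**config_kwargs` does not raise a `TypeError`; one common pattern is to keep the field with a sentinel default and warn in `__post_init__`:

```python
import warnings
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextConfigSketch:
    # Hypothetical config mirroring the issue: `errors` is deprecated
    # in favor of `encoding_errors`, but must stay a valid keyword.
    encoding: str = "utf-8"
    errors: str = "deprecated"
    encoding_errors: Optional[str] = None

    def __post_init__(self):
        if self.errors != "deprecated":
            warnings.warn(
                "'errors' is deprecated; use 'encoding_errors' instead.",
                FutureWarning,
            )
            self.encoding_errors = self.errors

cfg = TextConfigSketch(errors="strict")  # warns instead of raising TypeError
print(cfg.encoding_errors)  # strict
```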
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6095/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3617/comments
https://api.github.com/repos/huggingface/datasets/issues/3617/events
https://github.com/huggingface/datasets/pull/3617
1,111,938,691
PR_kwDODunzps4xdb8K
3,617
PR for the CFPB Consumer Complaints dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42403093?v=4", "events_url": "https://api.github.com/users/kayvane1/events{/privacy}", "followers_url": "https://api.github.com/users/kayvane1/followers", "following_url": "https://api.github.com/users/kayvane1/following{/other_user}", "gists_url": "https://api.github.com/users/kayvane1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kayvane1", "id": 42403093, "login": "kayvane1", "node_id": "MDQ6VXNlcjQyNDAzMDkz", "organizations_url": "https://api.github.com/users/kayvane1/orgs", "received_events_url": "https://api.github.com/users/kayvane1/received_events", "repos_url": "https://api.github.com/users/kayvane1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kayvane1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kayvane1/subscriptions", "type": "User", "url": "https://api.github.com/users/kayvane1" }
[]
closed
false
null
[]
null
[ "> Nice ! Thanks for adding this dataset :)\n> \n> \n> \n> I left a few comments:\n\nThanks!\n\nI'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring. \n\nI'll rerun it and share the errors and try debug", "Hey @lhoestq ,\r\n\r\nWhen I use this dataset as part of my project, I'm using this method\r\n\r\n`text_dataset = text_dataset['train'].train_test_split(test_size=0.2)`\r\n\r\nto create a train and test split as this dataset doesn't have one. \r\n\r\nCan I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?", "> I'd be interested in contributing to the core codebase - I had to go down the custom loading approach because I couldn't pull this dataset in using the load_dataset() method. Using either the json or csv files available for this dataset as it was erroring.\r\n>\r\n> I'll rerun it and share the errors and try debug\r\n\r\nCool ! Let me know if you have questions or if I can help :)\r\n\r\n> Can I add this directly in the script itself somehow, or is it better to give users the flexibility to slice and split their datasets after loading?\r\n\r\nUsually we let the users the flexibility to split the datasets themselves (unless the dataset is already split, or if there is already a standard way to split it in the papers that use it)", "Thanks Quentin!\r\nAll okay to merge now?", "Thanks for the feedback Quentin and Mario - implemented all changes :)\r\n![Screenshot 2022-01-31 at 23 11 20](https://user-images.githubusercontent.com/42403093/151889262-30737feb-ac9c-4c5a-9326-9812db1d05bc.png)\r\n", "Hey @lhoestq / @mariosasko \r\nAny other changes required to merge? 🤗", "Hi ! Thanks and sorry for the late response \r\n\r\nIt looks very good ! The CI is still failing because it can't file the dummy_data.zip file, you can fix that by moving `datasets/consumer-finance-complaints/dummy/1.0.0/dummy_data.zip` to `datasets/consumer-finance-complaints/dummy/0.0.0/dummy_data.zip` and it should be all good !", "@lhoestq - hopefully that should do it!\r\n" ]
"2022-01-23T17:47:12Z"
"2022-02-07T21:08:31Z"
"2022-02-07T21:08:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3617.diff", "html_url": "https://github.com/huggingface/datasets/pull/3617", "merged_at": "2022-02-07T21:08:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/3617.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3617" }
I think I followed all the steps, but please let me know if anything needs changing or if there are any improvements I can make to the code quality.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3617/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3617/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4563/comments
https://api.github.com/repos/huggingface/datasets/issues/4563/events
https://github.com/huggingface/datasets/pull/4563
1,283,914,383
PR_kwDODunzps46UmZQ
4,563
Support streaming allocine dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-06-24T15:55:03Z"
"2022-06-24T16:54:57Z"
"2022-06-24T16:44:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4563.diff", "html_url": "https://github.com/huggingface/datasets/pull/4563", "merged_at": "2022-06-24T16:44:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4563" }
Support streaming allocine dataset. Fix #4562.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4563/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/155/comments
https://api.github.com/repos/huggingface/datasets/issues/155/events
https://github.com/huggingface/datasets/pull/155
620,067,946
MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0
155
Include more links in README, fix typos
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/bharatr21/events{/privacy}", "followers_url": "https://api.github.com/users/bharatr21/followers", "following_url": "https://api.github.com/users/bharatr21/following{/other_user}", "gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bharatr21", "id": 13381361, "login": "bharatr21", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/bharatr21/orgs", "received_events_url": "https://api.github.com/users/bharatr21/received_events", "repos_url": "https://api.github.com/users/bharatr21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions", "type": "User", "url": "https://api.github.com/users/bharatr21" }
[]
closed
false
null
[]
null
[ "I fixed a conflict :) thanks !" ]
"2020-05-18T09:47:08Z"
"2020-05-28T08:31:57Z"
"2020-05-28T08:31:57Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/155.diff", "html_url": "https://github.com/huggingface/datasets/pull/155", "merged_at": "2020-05-28T08:31:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/155.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/155" }
Include more links and fix typos in README
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/155/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/155/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2407/comments
https://api.github.com/repos/huggingface/datasets/issues/2407/events
https://github.com/huggingface/datasets/issues/2407
903,111,755
MDU6SXNzdWU5MDMxMTE3NTU=
2,407
.map() function got an unexpected keyword argument 'cache_file_name'
{ "avatar_url": "https://avatars.githubusercontent.com/u/7390482?v=4", "events_url": "https://api.github.com/users/cindyxinyiwang/events{/privacy}", "followers_url": "https://api.github.com/users/cindyxinyiwang/followers", "following_url": "https://api.github.com/users/cindyxinyiwang/following{/other_user}", "gists_url": "https://api.github.com/users/cindyxinyiwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cindyxinyiwang", "id": 7390482, "login": "cindyxinyiwang", "node_id": "MDQ6VXNlcjczOTA0ODI=", "organizations_url": "https://api.github.com/users/cindyxinyiwang/orgs", "received_events_url": "https://api.github.com/users/cindyxinyiwang/received_events", "repos_url": "https://api.github.com/users/cindyxinyiwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cindyxinyiwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cindyxinyiwang/subscriptions", "type": "User", "url": "https://api.github.com/users/cindyxinyiwang" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558", "Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.", "I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!" ]
"2021-05-27T01:54:26Z"
"2021-05-27T13:46:40Z"
"2021-05-27T13:46:40Z"
NONE
null
null
null
## Describe the bug I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass the argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'". I believe I'm using the latest datasets, 1.6.2. It also seems like the documentation and the actual code indicate that there is a 'cache_file_name' argument for the .map() function. Here is the code I use. ## Steps to reproduce the bug ```datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, cache_file_name="my_tokenized_file" ) ``` ## Actual results tokenized_datasets = datasets.map( TypeError: map() got an unexpected keyword argument 'cache_file_name' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 3.0.0
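Following the resolution in the comments: `cache_file_name` (singular) belongs to `Dataset.map`, while a `DatasetDict` expects `cache_file_names`, one file per split. Below is a minimal sketch of the working call, with an illustrative stand-in tokenize function and hypothetical file names:

```python
from datasets import load_from_disk

datasets = load_from_disk("my_path")  # a DatasetDict with several splits

def tokenize_function(examples):
    # Stand-in for a real tokenizer call.
    return {"n_chars": [len(t) for t in examples["text"]]}

tokenized = datasets.map(
    tokenize_function,
    batched=True,
    load_from_cache_file=True,
    # One cache file per split; iterating a DatasetDict yields split names.
    cache_file_names={split: f"my_tokenized_{split}.arrow" for split in datasets},
)
```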
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2407/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6140/comments
https://api.github.com/repos/huggingface/datasets/issues/6140/events
https://github.com/huggingface/datasets/issues/6140
1,845,384,712
I_kwDODunzps5t_lYI
6,140
Misalignment between file format specified in configs metadata YAML and the inferred builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2023-08-10T15:07:34Z"
"2023-08-17T20:37:20Z"
"2023-08-17T20:37:20Z"
MEMBER
null
null
null
There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV): ```yaml configs: - config_name: default data_files: - split: train path: data.csv ``` and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML. See: https://huggingface.co/datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1 CC: @freddyaboulton @polinaeterna
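To compare the builder that `datasets` actually infers for a repository with what the README YAML declares, the builder can be loaded without downloading the data. A sketch, using the repo id from the issue and the standard `datasets` API (the printed values are illustrative):

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("freddyaboulton/chatinterface_with_image_csv")

# If the YAML points at data.csv but JSON files dominate the repo,
# the inferred packaged builder can disagree with the declared format.
print(type(builder).__name__)     # e.g. "Json" where "Csv" was expected
print(builder.config.data_files)  # the files the builder will actually read
```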
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6140/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1954/comments
https://api.github.com/repos/huggingface/datasets/issues/1954/events
https://github.com/huggingface/datasets/issues/1954
817,565,563
MDU6SXNzdWU4MTc1NjU1NjM=
1,954
add a new column
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ", "Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support for a more native way of adding a new column ;)" ]
"2021-02-26T18:17:27Z"
"2021-04-29T14:50:43Z"
"2021-04-29T14:50:43Z"
NONE
null
null
null
Hi, I need to add a new column to the dataset and was wondering how this can be done. Thanks @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1954/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3871/comments
https://api.github.com/repos/huggingface/datasets/issues/3871/events
https://github.com/huggingface/datasets/pull/3871
1,163,714,113
PR_kwDODunzps40KRcM
3,871
add pandas to env command
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3871). All of your documentation changes will be reflected on that endpoint.", "Think failures are unrelated - feel free to merge whenever you want :-)" ]
"2022-03-09T09:48:51Z"
"2022-03-09T11:21:38Z"
"2022-03-09T11:21:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3871.diff", "html_url": "https://github.com/huggingface/datasets/pull/3871", "merged_at": "2022-03-09T11:21:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/3871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3871" }
Pandas is a required package and is used quite a bit. I don't see any downside to adding its version to the `datasets-cli env` command.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3871/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3871/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3354/comments
https://api.github.com/repos/huggingface/datasets/issues/3354/events
https://github.com/huggingface/datasets/pull/3354
1,068,307,271
PR_kwDODunzps4vPl9d
3,354
Remove duplicate name from dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-12-01T11:45:40Z"
"2021-12-01T13:14:30Z"
"2021-12-01T13:14:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3354.diff", "html_url": "https://github.com/huggingface/datasets/pull/3354", "merged_at": "2021-12-01T13:14:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/3354.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3354" }
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3354/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3354/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1611/comments
https://api.github.com/repos/huggingface/datasets/issues/1611/events
https://github.com/huggingface/datasets/issues/1611
771,486,456
MDU6SXNzdWU3NzE0ODY0NTY=
1,611
shuffle with torch generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ", "@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large datasets using huggingface library, then I need to define a distributed sampler on top of it, for this I need to shard the datasets and give each shard to each core, but before sharding I need to shuffle the dataset, if you are familiar with distributed sampler in pytorch, this needs to be done based on seed+epoch generator to make it consistent across the cores they do it through defining a torch generator, I was wondering if you could tell me how I can shuffle the data for now, I am unfortunately blocked by this and have a limited time left, and I greatly appreciate your help on this. thanks ", "@lhoestq Is there a way I could shuffle the datasets from this library with a custom defined shuffle function? thanks for your help on this. ", "Right now the shuffle method only accepts the `seed` (optional int) or `generator` (optional `np.random.Generator`) parameters.\r\n\r\nHere is a suggestion to shuffle the data using your own shuffle method using `select`.\r\n`select` can be used to re-order the dataset samples or simply pick a few ones if you want.\r\nIt's what is used under the hood when you call `dataset.shuffle`.\r\n\r\nTo use `select` you must have the list of re-ordered indices of your samples.\r\n\r\nLet's say you have a `shuffle` methods that you want to use. Then you can first build your shuffled list of indices:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\n```\r\n\r\nThen you can shuffle your dataset using the shuffled indices with \r\n```python\r\nshuffled_dataset = dataset.select(shuffled_indices)\r\n```\r\n\r\nHope that helps", "thank you @lhoestq thank you very much for responding to my question, this greatly helped me and remove the blocking for continuing my work, thanks. ", "@lhoestq could you confirm the method proposed does not bring the whole data into memory? thanks ", "Yes the dataset is not loaded into memory", "great. thanks a lot." ]
"2020-12-20T00:57:14Z"
"2022-06-01T15:30:13Z"
"2022-06-01T15:30:13Z"
NONE
null
null
null
Hi, I need to shuffle multiple large datasets with `generator = torch.Generator()` for a distributed sampler, which needs to make sure the datasets are consistent across different cores. For this it is really necessary for me to use a torch generator, but based on the documentation this generator is not supported with datasets. I really need to make shuffle work with this generator and was wondering what I can do about this issue. Thanks for your help @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1611/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1611/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/369/comments
https://api.github.com/repos/huggingface/datasets/issues/369/events
https://github.com/huggingface/datasets/issues/369
654,186,890
MDU6SXNzdWU2NTQxODY4OTA=
369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/", "I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 but still getting this error. What could cause this error?" ]
"2020-07-09T16:16:53Z"
"2020-12-15T23:07:22Z"
"2020-07-10T14:52:06Z"
CONTRIBUTOR
null
null
null
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False): File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` I haven't been able to find any reports of this specific pyarrow error here or elsewhere.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/369/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2448/comments
https://api.github.com/repos/huggingface/datasets/issues/2448/events
https://github.com/huggingface/datasets/pull/2448
912,360,109
MDExOlB1bGxSZXF1ZXN0NjYyNTI2NjA3
2,448
Fix flores download link
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-06-05T17:30:24Z"
"2021-06-08T20:02:58Z"
"2021-06-07T08:18:25Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2448.diff", "html_url": "https://github.com/huggingface/datasets/pull/2448", "merged_at": "2021-06-07T08:18:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2448.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2448" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2448/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3141/comments
https://api.github.com/repos/huggingface/datasets/issues/3141/events
https://github.com/huggingface/datasets/pull/3141
1,033,555,910
PR_kwDODunzps4tjGYz
3,141
Fix caching bugs
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
"2021-10-22T12:59:25Z"
"2021-10-22T20:52:08Z"
"2021-10-22T13:47:05Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3141.diff", "html_url": "https://github.com/huggingface/datasets/pull/3141", "merged_at": "2021-10-22T13:47:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/3141.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3141" }
This PR fixes some caching bugs (most likely introduced in the latest refactor): * remove ")" added by accident in the dataset dir name * correctly pass the namespace kwargs in `CachedDatasetModuleFactory` * improve the warning message if `HF_DATASETS_OFFLINE` is `True`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3141/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3141/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/515/comments
https://api.github.com/repos/huggingface/datasets/issues/515/events
https://github.com/huggingface/datasets/pull/515
681,845,619
MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0
515
Fix batched map for formatted dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-08-19T13:34:50Z"
"2020-08-20T20:30:43Z"
"2020-08-20T20:30:42Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/515.diff", "html_url": "https://github.com/huggingface/datasets/pull/515", "merged_at": "2020-08-20T20:30:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/515" }
If you had a dataset formatted as numpy, for example, and tried to do a batched map, it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000). This happened during the creation of the `pa.Table`, since columns had different lengths.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/515/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1462/comments
https://api.github.com/repos/huggingface/datasets/issues/1462/events
https://github.com/huggingface/datasets/pull/1462
761,489,274
MDExOlB1bGxSZXF1ZXN0NTM2MTQ4Njc1
1,462
Added conv ai 2 (Again)
{ "avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4", "events_url": "https://api.github.com/users/rkc007/events{/privacy}", "followers_url": "https://api.github.com/users/rkc007/followers", "following_url": "https://api.github.com/users/rkc007/following{/other_user}", "gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rkc007", "id": 22396042, "login": "rkc007", "node_id": "MDQ6VXNlcjIyMzk2MDQy", "organizations_url": "https://api.github.com/users/rkc007/orgs", "received_events_url": "https://api.github.com/users/rkc007/received_events", "repos_url": "https://api.github.com/users/rkc007/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rkc007/subscriptions", "type": "User", "url": "https://api.github.com/users/rkc007" }
[]
closed
false
null
[]
null
[ "Looking perfect to me, need to rerun the tests\r\n", "Thanks, @tanmoyio. \r\nHow do I rerun the tests? Should I change something or push a new commit?", "@rkc007 you don't need to rerun it, @lhoestq @yjernite will rerun it, as there are huge number of PRs in the queue it might take lil bit of time. ", "ive just re-run the tests", "Thank you @abhishekkrthakur. Can you please rerun it again? It seems something was broken in CI during the previous test.", "@lhoestq Sorry for the mess. I don't know why this keeps on happening. I tried step by step process of updating the PR but seems something is wrong. This happened for 2nd time with the same PR. Apologies for that. \r\n\r\nNew PR -> https://github.com/huggingface/datasets/pull/1527\r\nAlso, I fixed everything in the new PR." ]
"2020-12-10T18:21:55Z"
"2020-12-13T00:21:32Z"
"2020-12-13T00:21:31Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1462.diff", "html_url": "https://github.com/huggingface/datasets/pull/1462", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1462" }
The original PR -> https://github.com/huggingface/datasets/pull/1383 Reason for creating it again: I had to create the PR again due to a master rebasing issue. After rebasing the changes, all the previous commits got added to the branch.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1462/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3954/comments
https://api.github.com/repos/huggingface/datasets/issues/3954/events
https://github.com/huggingface/datasets/issues/3954
1,172,141,664
I_kwDODunzps5F3XZg
3,954
The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4", "events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}", "followers_url": "https://api.github.com/users/MatanBenChorin/followers", "following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}", "gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MatanBenChorin", "id": 49593805, "login": "MatanBenChorin", "node_id": "MDQ6VXNlcjQ5NTkzODA1", "organizations_url": "https://api.github.com/users/MatanBenChorin/orgs", "received_events_url": "https://api.github.com/users/MatanBenChorin/received_events", "repos_url": "https://api.github.com/users/MatanBenChorin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions", "type": "User", "url": "https://api.github.com/users/MatanBenChorin" }
[]
closed
false
null
[]
null
[ "Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.", "Hi, \r\nThank you", "Thanks for reporting. We are looking at it and will give updates here.", "I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```", "The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```", "Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```" ]
"2022-03-17T09:38:11Z"
"2022-04-20T12:39:07Z"
"2022-04-20T12:39:07Z"
NONE
null
null
null
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset? Yes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3954/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2864/comments
https://api.github.com/repos/huggingface/datasets/issues/2864/events
https://github.com/huggingface/datasets/pull/2864
986,159,438
MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw
2,864
Fix data URL in ToTTo dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
[]
"2021-09-02T05:25:08Z"
"2021-09-02T06:47:40Z"
"2021-09-02T06:47:40Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "html_url": "https://github.com/huggingface/datasets/pull/2864", "merged_at": "2021-09-02T06:47:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864" }
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2864/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6443/comments
https://api.github.com/repos/huggingface/datasets/issues/6443/events
https://github.com/huggingface/datasets/issues/6443
2,006,568,368
I_kwDODunzps53mc2w
6,443
Trouble loading files defined in YAML explicitly
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "There is a typo in one of the file names - `data/edf.csv` should be renamed to `data/def.csv` 🙂. ", "wow, I reviewed it twice to avoid being ashamed like that, but... I didn't notice the typo.\r\n\r\n---\r\n\r\nBesides this: do you think we would be able to improve the error message to make this clearer?" ]
"2023-11-22T15:18:10Z"
"2023-11-23T09:06:20Z"
null
CONTRIBUTOR
null
null
null
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ ├── README.md ├── data/ │ ├── abc.csv │ └── def.csv └── holdout/ └── ghi.csv --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- ``` It raises the following error: ``` Error code: ConfigNamesError Exception: FileNotFoundError Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. 
Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6443/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/donggyukimc", "id": 16605764, "login": "donggyukimc", "node_id": "MDQ6VXNlcjE2NjA1NzY0", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "repos_url": "https://api.github.com/users/donggyukimc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "type": "User", "url": "https://api.github.com/users/donggyukimc" }
[]
closed
false
null
[]
null
[ "Why did you close this issue? How did you end up finding the evidence documents? I'm running into a similar issue with other KILT tasks." ]
"2021-03-07T15:41:35Z"
"2022-12-19T19:25:14Z"
"2021-03-17T05:51:01Z"
NONE
null
null
null
In the original KILT benchmark (https://github.com/facebookresearch/KILT), every sample has its evidence document (i.e. wikipedia page id) for prediction. For example, a sample in the ELI5 dataset has the format including provenance (= evidence document) like this `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty list of provenance. `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]} ` Should I perform another procedure to obtain the evidence documents?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3166/comments
https://api.github.com/repos/huggingface/datasets/issues/3166/events
https://github.com/huggingface/datasets/pull/3166
1,036,450,283
PR_kwDODunzps4tsVQJ
3,166
Deprecate prepare_module
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Sounds good, thanks !" ]
"2021-10-26T15:28:24Z"
"2021-11-05T09:27:37Z"
"2021-11-05T09:27:36Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3166.diff", "html_url": "https://github.com/huggingface/datasets/pull/3166", "merged_at": "2021-11-05T09:27:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/3166.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3166" }
In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes `prepare_module` from the rest of the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3166/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3166/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5203/comments
https://api.github.com/repos/huggingface/datasets/issues/5203/events
https://github.com/huggingface/datasets/pull/5203
1,436,710,518
PR_kwDODunzps5CPlnW
5,203
Update canonical links to Hub links
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-04T22:50:50Z"
"2022-11-07T18:43:05Z"
"2022-11-07T18:40:19Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5203.diff", "html_url": "https://github.com/huggingface/datasets/pull/5203", "merged_at": "2022-11-07T18:40:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/5203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5203" }
This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5203/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5203/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6463/comments
https://api.github.com/repos/huggingface/datasets/issues/6463/events
https://github.com/huggingface/datasets/pull/6463
2,020,702,967
PR_kwDODunzps5g46_4
6,463
Disable benchmarks in PRs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "It's a way to detect regressions in performance sensitive methods like map, and find the commit that lead to the regression", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005357 / 0.011353 (-0.005996) | 0.003295 / 0.011008 (-0.007713) | 0.062354 / 0.038508 (0.023846) | 0.054207 / 0.023109 (0.031098) | 0.240030 / 0.275898 (-0.035869) | 0.267863 / 0.323480 (-0.055617) | 0.002925 / 0.007986 (-0.005061) | 0.002634 / 0.004328 (-0.001695) | 0.047952 / 0.004250 (0.043702) | 0.038424 / 0.037052 (0.001372) | 0.248059 / 0.258489 (-0.010430) | 0.271923 / 0.293841 (-0.021918) | 0.027513 / 0.128546 (-0.101034) | 0.010344 / 0.075646 (-0.065302) | 0.210864 / 0.419271 (-0.208407) | 0.035911 / 0.043533 (-0.007622) | 0.245166 / 0.255139 (-0.009973) | 0.260914 / 0.283200 (-0.022285) | 0.016709 / 0.141683 (-0.124974) | 1.098324 / 1.452155 (-0.353830) | 1.162638 / 1.492716 (-0.330079) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094419 / 0.018006 (0.076413) | 0.303209 / 0.000490 (0.302719) | 0.000214 / 0.000200 (0.000014) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018350 / 0.037411 (-0.019061) | 0.060625 / 0.014526 (0.046099) | 0.072545 / 0.176557 (-0.104012) | 0.120905 / 0.737135 (-0.616231) | 0.073858 / 0.296338 (-0.222480) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282011 / 
0.215209 (0.066802) | 2.758741 / 2.077655 (0.681086) | 1.431691 / 1.504120 (-0.072429) | 1.315883 / 1.541195 (-0.225312) | 1.344235 / 1.468490 (-0.124255) | 0.562117 / 4.584777 (-4.022660) | 2.385641 / 3.745712 (-1.360071) | 2.785402 / 5.269862 (-2.484460) | 1.753912 / 4.565676 (-2.811764) | 0.064054 / 0.424275 (-0.360221) | 0.005050 / 0.007607 (-0.002557) | 0.336452 / 0.226044 (0.110407) | 3.302481 / 2.268929 (1.033553) | 1.794105 / 55.444624 (-53.650519) | 1.519346 / 6.876477 (-5.357131) | 1.514911 / 2.142072 (-0.627161) | 0.655779 / 4.805227 (-4.149449) | 0.117913 / 6.500664 (-6.382751) | 0.042229 / 0.075469 (-0.033240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.935196 / 1.841788 (-0.906591) | 11.490113 / 8.074308 (3.415805) | 10.542446 / 10.191392 (0.351054) | 0.129614 / 0.680424 (-0.550810) | 0.014919 / 0.534201 (-0.519282) | 0.288448 / 0.579283 (-0.290835) | 0.266929 / 0.434364 (-0.167435) | 0.328830 / 0.540337 (-0.211507) | 0.475510 / 1.386936 (-0.911426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005469 / 0.011353 (-0.005884) | 0.003798 / 0.011008 (-0.007210) | 0.049129 / 0.038508 (0.010621) | 0.055490 / 0.023109 (0.032380) | 0.265828 / 0.275898 (-0.010070) | 0.286031 / 0.323480 (-0.037448) | 0.004075 / 0.007986 (-0.003910) | 0.002668 / 0.004328 (-0.001660) | 0.047823 / 0.004250 (0.043573) | 0.041946 / 0.037052 (0.004894) | 0.270359 / 0.258489 (0.011869) | 0.294287 / 0.293841 (0.000446) | 0.029643 / 0.128546 (-0.098903) | 0.010523 / 0.075646 (-0.065123) | 0.057370 / 0.419271 (-0.361902) | 0.033149 / 0.043533 (-0.010384) | 0.264408 / 0.255139 (0.009269) | 0.280413 / 0.283200 (-0.002787) | 0.018313 / 0.141683 (-0.123370) | 1.105982 / 1.452155 (-0.346173) | 1.182486 / 1.492716 (-0.310230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092643 / 0.018006 (0.074637) | 0.301320 / 0.000490 (0.300831) | 0.000221 / 0.000200 (0.000021) | 
0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021253 / 0.037411 (-0.016158) | 0.068052 / 0.014526 (0.053527) | 0.080821 / 0.176557 (-0.095736) | 0.119320 / 0.737135 (-0.617816) | 0.081952 / 0.296338 (-0.214387) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288536 / 0.215209 (0.073327) | 2.819900 / 2.077655 (0.742245) | 1.545210 / 1.504120 (0.041090) | 1.422047 / 1.541195 (-0.119147) | 1.439158 / 1.468490 (-0.029332) | 0.564910 / 4.584777 (-4.019867) | 2.430474 / 3.745712 (-1.315238) | 2.763979 / 5.269862 (-2.505882) | 1.732203 / 4.565676 (-2.833474) | 0.062692 / 0.424275 (-0.361583) | 0.004936 / 0.007607 (-0.002671) | 0.341626 / 0.226044 (0.115582) | 3.366623 / 2.268929 (1.097694) | 1.917198 / 55.444624 (-53.527426) | 1.637635 / 6.876477 (-5.238842) | 1.625953 / 2.142072 (-0.516119) | 0.634936 / 4.805227 (-4.170291) | 0.115336 / 6.500664 (-6.385328) | 0.040946 / 0.075469 (-0.034524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964865 / 1.841788 (-0.876922) | 12.077233 / 8.074308 (4.002925) | 10.664120 / 10.191392 (0.472728) | 0.132084 / 0.680424 (-0.548340) | 0.015931 / 0.534201 (-0.518270) | 0.289181 / 0.579283 (-0.290102) | 0.276943 / 0.434364 (-0.157420) | 0.324884 / 0.540337 (-0.215453) | 0.552570 / 1.386936 (-0.834366) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4ac3f2b3f6d867673e41a0253f9e1ad48db68a8e \"CML watermark\")\n" ]
"2023-12-01T11:35:30Z"
"2023-12-01T12:09:09Z"
"2023-12-01T12:03:04Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6463.diff", "html_url": "https://github.com/huggingface/datasets/pull/6463", "merged_at": "2023-12-01T12:03:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6463" }
This is in order to keep PR pages less spammy / more readable. Having the benchmarks on commits on `main` is enough, imo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6463/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2599/comments
https://api.github.com/repos/huggingface/datasets/issues/2599/events
https://github.com/huggingface/datasets/pull/2599
937,980,229
MDExOlB1bGxSZXF1ZXN0Njg0NDQ2MTYx
2,599
Update processing.rst with other export formats
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[]
"2021-07-06T14:50:38Z"
"2021-07-12T14:10:16Z"
"2021-07-07T08:05:48Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2599.diff", "html_url": "https://github.com/huggingface/datasets/pull/2599", "merged_at": "2021-07-07T08:05:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2599.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2599" }
Document the other supported export formats besides CSV.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2599/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2599/timeline
null
null
true
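For reference, a minimal sketch of the export paths the PR above documents, assuming a recent `datasets` version (the dataset name is only a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")  # placeholder dataset

ds.to_csv("train.csv")          # CSV
ds.to_json("train.jsonl")       # JSON Lines
ds.to_parquet("train.parquet")  # Parquet (added in releases later than the PR targets)
df = ds.to_pandas()             # in-memory pandas DataFrame
d = ds.to_dict()                # plain Python dict
```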
https://api.github.com/repos/huggingface/datasets/issues/1819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/events
https://github.com/huggingface/datasets/pull/1819
801,448,670
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
[]
closed
false
null
[]
null
[]
"2021-02-04T16:36:46Z"
"2021-02-04T16:52:27Z"
"2021-02-04T16:52:26Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "html_url": "https://github.com/huggingface/datasets/pull/1819", "merged_at": "2021-02-04T16:52:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819" }
Fixed documentation spelling errors. Wrong: `S3Fileystem`. Right: `S3FileSystem`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3112/comments
https://api.github.com/repos/huggingface/datasets/issues/3112/events
https://github.com/huggingface/datasets/issues/3112
1,030,613,083
I_kwDODunzps49behb
3,112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
{ "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BenoitDalFerro", "id": 69694610, "login": "BenoitDalFerro", "node_id": "MDQ6VXNlcjY5Njk0NjEw", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "type": "User", "url": "https://api.github.com/users/BenoitDalFerro" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.", "fixed", "Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:", "Actually this is is a live bug, documented yet still live so reopening" ]
"2021-10-19T18:21:41Z"
"2021-10-19T18:52:29Z"
null
NONE
null
null
null
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB Note that I always run `batch_size=writer_batch_size` : ## Steps to reproduce the bug ```python datasets.map(lambda example : {"column_name" : function(arguments)}, batched=False, remove_columns = datasets.column_names, batch_size=batch_size, writer_batch_size=batch_size, disable_nullable=True, num_proc=None, desc="blablabla") ``` ## Introspecting CUDA memory during bug Placed within `function(arguments)` the following statement to introspect memory usage, merely a little over 1/4 of 2Gb `print(torch.cuda.memory_summary(device=device, abbreviated=False))` > |===========================================================================| | PyTorch CUDA memory summary, device ID 0 | |---------------------------------------------------------------------------| | CUDA OOMs: 0 | cudaMalloc retries: 0 | |===========================================================================| | Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed | |---------------------------------------------------------------------------| | Allocated memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB | | from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB | | from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB | |---------------------------------------------------------------------------| | Active memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB | | from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB | | from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB | |---------------------------------------------------------------------------| | GPU reserved memory | 598016 KB | 598016 KB | 598016 KB | 0 B | | from large pool | 595968 KB | 595968 KB | 595968 KB | 0 B | | from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B | |---------------------------------------------------------------------------| | Non-releasable memory | 36117 KB | 52292 KB | 274275 KB | 238158 KB | | from large pool | 34816 KB | 51537 KB | 261713 KB | 226897 KB | | from small pool | 1301 KB | 2045 KB | 12562 KB | 11261 KB | |---------------------------------------------------------------------------| | Allocations | 198 | 224 | 478 | 280 | | from large pool | 74 | 75 | 75 | 1 | | from small pool | 124 | 150 | 403 | 279 | |---------------------------------------------------------------------------| | Active allocs | 198 | 224 | 478 | 280 | | from large pool | 74 | 75 | 75 | 1 | | from small pool | 124 | 150 | 403 | 279 | |---------------------------------------------------------------------------| | GPU reserved segments | 21 | 21 | 21 | 0 | | from large pool | 20 | 20 | 20 | 0 | | from small pool | 1 | 1 | 1 | 0 | |---------------------------------------------------------------------------| | Non-releasable allocs | 18 | 23 | 166 | 148 | | from large pool | 17 | 18 | 19 | 2 | | from small pool | 1 | 6 | 147 | 146 | |===========================================================================| ## Expected results Efficiently process the datasets and write it down to disk. 
## Actual results -------------------------------------------------------------------------- OverflowError Traceback (most recent call last) ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2390 else: -> 2391 writer.write(example) 2392 else: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write(self, example, key, writer_batch_size) 367 --> 368 self.write_examples_on_file() 369 ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self) 316 if not isinstance(pa_array[0], pa.lib.FloatScalar): --> 317 raise OverflowError( 318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format( OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB During handling of the above exception, another exception occurred: OverflowError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_16268/2456940807.py in <module> 3 #tracker = OfflineEmissionsTracker(country_iso_code="FRA", project_name='xxx'+time_stamp,output_dir='./codecarbon') 4 #tracker.start() ----> 5 process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection=['wikipedia'], from_scratch=True, 6 clean_sentences=False, negative_sampling=False, translate=False, tokenize=False, generate_embeddings=True, concatenate_embeddings=False, 7 max_sample=10000, padding='do_not_pad', truncation=True, cpu_batch_size=1000, gpu_batch_size=2, cpu_writer_batch_size=1000, gpu_writer_batch_size=2, disable_nullable=True, num_proc=None) # ~\xxx\xxx.py in process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection, from_scratch, clean_sentences, translate, negative_sampling, tokenize, generate_embeddings, concatenate_embeddings, max_sample, padding, truncation, cpu_batch_size, gpu_batch_size, cpu_writer_batch_size, gpu_writer_batch_size, disable_nullable, num_proc) 481 for column in tqdm(dataset.column_names, desc=f'Processing column', leave=False): 482 if "xxx_" in column: --> 483 dataset = dataset.map(lambda example : 484 {"embeddings_"+str(column).replace("translated_",""):function(input_ids=example[column], 485 token_type_ids=example[column.replace("input_ids","token_type_ids")], ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2034 2035 if num_proc is None or num_proc == 1: -> 2036 return self._map_single( 2037 function=function, 2038 with_indices=with_indices, ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs) 501 self: "Dataset" = kwargs.pop("self") 502 # apply actual function --> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 505 for dataset in datasets: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # 
apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~\anaconda3\envs\xxx\lib\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2425 if update_data: 2426 if writer is not None: -> 2427 writer.finalize() 2428 if tmp_file is not None: 2429 tmp_file.close() ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in finalize(self, close_stream) 440 # Re-intializing to empty list for next batch 441 self.hkey_record = [] --> 442 self.write_examples_on_file() 443 if self.pa_writer is None: 444 if self._schema is not None: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self) 315 # This check fails with FloatArrays with nans, which is not what we want, so account for that: 316 if not isinstance(pa_array[0], pa.lib.FloatScalar): --> 317 raise OverflowError( 318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format( 319 type(pa_array) OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0 ##Next steps Testing on Linux. @albertvillanova
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3112/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3112/timeline
null
null
false
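A minimal sketch of the mitigation named in the error message above, with `ds` and `embed_fn` as hypothetical stand-ins for the user's dataset and GPU embedding function:

```python
# writer_batch_size caps how many processed examples are buffered before each Arrow
# write, so lowering it keeps every written batch well under pyarrow's 2GB limit
processed = ds.map(
    lambda example: {"embeddings": embed_fn(example)},  # embed_fn is hypothetical
    batched=False,
    writer_batch_size=100,  # default is 1000; reduce further if the overflow persists
)
```

Note that the thread later traces this particular case to NaN-filled float arrays tripping the same overflow check, so shrinking batches is not guaranteed to be the fix.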
https://api.github.com/repos/huggingface/datasets/issues/2789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2789/comments
https://api.github.com/repos/huggingface/datasets/issues/2789/events
https://github.com/huggingface/datasets/pull/2789
967,361,934
MDExOlB1bGxSZXF1ZXN0NzA5NTQwMzY5
2,789
Updated dataset description of DaNE
{ "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "events_url": "https://api.github.com/users/KennethEnevoldsen/events{/privacy}", "followers_url": "https://api.github.com/users/KennethEnevoldsen/followers", "following_url": "https://api.github.com/users/KennethEnevoldsen/following{/other_user}", "gists_url": "https://api.github.com/users/KennethEnevoldsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KennethEnevoldsen", "id": 23721977, "login": "KennethEnevoldsen", "node_id": "MDQ6VXNlcjIzNzIxOTc3", "organizations_url": "https://api.github.com/users/KennethEnevoldsen/orgs", "received_events_url": "https://api.github.com/users/KennethEnevoldsen/received_events", "repos_url": "https://api.github.com/users/KennethEnevoldsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KennethEnevoldsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KennethEnevoldsen/subscriptions", "type": "User", "url": "https://api.github.com/users/KennethEnevoldsen" }
[]
closed
false
null
[]
null
[ "Thanks for finishing it @albertvillanova " ]
"2021-08-11T19:58:48Z"
"2021-08-12T16:10:59Z"
"2021-08-12T16:06:01Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2789.diff", "html_url": "https://github.com/huggingface/datasets/pull/2789", "merged_at": "2021-08-12T16:06:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2789.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2789" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2789/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2789/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2787/comments
https://api.github.com/repos/huggingface/datasets/issues/2787/events
https://github.com/huggingface/datasets/issues/2787
967,018,406
MDU6SXNzdWU5NjcwMTg0MDY=
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
{ "avatar_url": "https://avatars.githubusercontent.com/u/39627475?v=4", "events_url": "https://api.github.com/users/jinec/events{/privacy}", "followers_url": "https://api.github.com/users/jinec/followers", "following_url": "https://api.github.com/users/jinec/following{/other_user}", "gists_url": "https://api.github.com/users/jinec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jinec", "id": 39627475, "login": "jinec", "node_id": "MDQ6VXNlcjM5NjI3NDc1", "organizations_url": "https://api.github.com/users/jinec/orgs", "received_events_url": "https://api.github.com/users/jinec/received_events", "repos_url": "https://api.github.com/users/jinec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jinec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinec/subscriptions", "type": "User", "url": "https://api.github.com/users/jinec" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)", "Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com\r\n\r\nNormally, it should work if you wait a little and then retry.\r\n\r\nCould you please confirm if the problem persists?", "cannot connect,even by Web browser,please check that there is some problems。", "I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...", "> I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n\r\nI can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China", "Finally i can access it, by the superfast software. Thanks", "> Finally i can access it, by the superfast software. Thanks\r\n\r\nExcuse me, I have the same problem as you, could you please tell me how to solve it?", "It is not related to the area, the ConnectionError with http://raw.githubuserconent.com has persisted with load_data function, datasets module. However, it can be set to either wget or ssl snippet to download dataset from github as following. \r\n\r\n`$ wget https://raw.githubusercontent.com/... --no-check-certificate`\r\n\r\n\r\nor \r\n\r\nfor the tfds, nltk or pandas.read_csv downloading as follows. \r\n\r\n```\r\nimport ssl\r\n\r\ntry:\r\n _create_unverified_https_context = ssl._create_unverified_context\r\nexcept AttributeError:\r\n pass\r\nelse:\r\n ssl._create_default_https_context = _create_unverified_https_context\r\n```\r\n\r\nSo it is most probably the problem of github rather than users \r\n", "> > I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n> \r\n> I can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China\r\n\r\n所以老哥怎么解决这个问题呢" ]
"2021-08-11T16:19:01Z"
"2023-10-03T12:39:25Z"
"2021-08-18T15:09:18Z"
NONE
null
null
null
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset use_auth_token=use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py Trying to do python run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/mrpc/ Is this something on my end? From what I can tell, this was re-fixed by @fullyz a few months ago. Thank you!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2787/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2787/timeline
null
completed
false
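Building on the `wget --no-check-certificate` workaround quoted in the comments above, a hedged sketch (paths illustrative, not from the thread) of pointing `load_dataset` at a locally saved copy of the loading script when raw.githubusercontent.com is unreachable:

```python
from datasets import load_dataset

# after fetching glue.py once, e.g.:
#   wget https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py
# the script can be loaded from disk, avoiding the failing network call entirely
datasets = load_dataset("./glue.py", "mrpc", cache_dir="./cache")
```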
https://api.github.com/repos/huggingface/datasets/issues/5299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5299/comments
https://api.github.com/repos/huggingface/datasets/issues/5299/events
https://github.com/huggingface/datasets/pull/5299
1,464,695,091
PR_kwDODunzps5Dt3Sk
5,299
Fix xopen for Windows pathnames
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-11-25T15:35:28Z"
"2022-11-29T08:23:58Z"
"2022-11-29T08:21:24Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5299.diff", "html_url": "https://github.com/huggingface/datasets/pull/5299", "merged_at": "2022-11-29T08:21:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/5299.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5299" }
This PR fixes a bug in the `xopen` function for Windows pathnames. Fix #5298.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5299/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
https://api.github.com/repos/huggingface/datasets/issues/3465/events
https://github.com/huggingface/datasets/issues/3465
1,085,400,432
I_kwDODunzps5AseVw
3,465
Unable to load 'cnn_dailymail' dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4", "events_url": "https://api.github.com/users/talha1503/events{/privacy}", "followers_url": "https://api.github.com/users/talha1503/followers", "following_url": "https://api.github.com/users/talha1503/following{/other_user}", "gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/talha1503", "id": 42352729, "login": "talha1503", "node_id": "MDQ6VXNlcjQyMzUyNzI5", "organizations_url": "https://api.github.com/users/talha1503/orgs", "received_events_url": "https://api.github.com/users/talha1503/received_events", "repos_url": "https://api.github.com/users/talha1503/repos", "site_admin": false, "starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talha1503/subscriptions", "type": "User", "url": "https://api.github.com/users/talha1503" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?", "This looks related to https://github.com/huggingface/datasets/issues/996", "It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem" ]
"2021-12-21T03:32:21Z"
"2022-02-17T14:13:57Z"
"2022-02-17T14:13:57Z"
NONE
null
null
null
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expecting to load 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
null
completed
false
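A short sketch of the workaround from the closing comment above, loading the community mirror instead of the Google Drive-hosted original:

```python
from datasets import load_dataset

# the canonical script downloads from Google Drive, which enforces download quotas;
# this mirror (named in the last comment) serves the same "3.0.0" configuration
ds = load_dataset("ccdv/cnn_dailymail", "3.0.0")
```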
https://api.github.com/repos/huggingface/datasets/issues/1322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1322/comments
https://api.github.com/repos/huggingface/datasets/issues/1322/events
https://github.com/huggingface/datasets/pull/1322
759,576,003
MDExOlB1bGxSZXF1ZXN0NTM0NTU3Njg3
1,322
add indonlu benchmark datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/6518504?v=4", "events_url": "https://api.github.com/users/yasirabd/events{/privacy}", "followers_url": "https://api.github.com/users/yasirabd/followers", "following_url": "https://api.github.com/users/yasirabd/following{/other_user}", "gists_url": "https://api.github.com/users/yasirabd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yasirabd", "id": 6518504, "login": "yasirabd", "node_id": "MDQ6VXNlcjY1MTg1MDQ=", "organizations_url": "https://api.github.com/users/yasirabd/orgs", "received_events_url": "https://api.github.com/users/yasirabd/received_events", "repos_url": "https://api.github.com/users/yasirabd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yasirabd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yasirabd/subscriptions", "type": "User", "url": "https://api.github.com/users/yasirabd" }
[]
closed
false
null
[]
null
[]
"2020-12-08T16:10:58Z"
"2020-12-13T02:11:27Z"
"2020-12-13T01:54:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1322.diff", "html_url": "https://github.com/huggingface/datasets/pull/1322", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1322.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1322" }
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1322/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1322/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5251/comments
https://api.github.com/repos/huggingface/datasets/issues/5251/events
https://github.com/huggingface/datasets/issues/5251
1,451,761,321
I_kwDODunzps5WiB6p
5,251
Docs are not generated after latest release
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
null
[]
null
[ "After a discussion with @mishig25:\r\n- He said that this action should be triggered if we call our release branch according to the regex `v*-release`, as transformers does\r\n- I said that our procedure is different: our release branch is *temporary* and it is deleted just after the release PR is merged to main\r\n - Indeed the release tag is not yet created when we make the release PR (not event when this is merged to main), but when we make the Release itself.\r\n\r\nI was thinking that maybe we could change the triggering event: use `release` instead of `push`.\r\n\r\nWhat do you think, @huggingface/datasets?", "Why is it an issue if our branch is temporary ?", "He says not; but the branch has no tag yet; does the doc building require the tag? Or just the version number in `__init__.py` or setup.py?", "It uses `module.__version__` (i.e. the one defined in `__init__.py`) - no need to have a tag\r\n\r\nhttps://github.com/huggingface/doc-builder/blob/81575cf081964c30ea5fd39450f4820db963f18e/src/doc_builder/commands/build.py#L69", "Thanks, @lhoestq.\r\n\r\n@mishig25 has manually forced the generation of the docs, that are live for 2.7.0 version: https://huggingface.co/docs/datasets/v2.7.0/en/index ", "Cool ! this can be closed then ?", "I was waiting for #5250 to be merged to close this.", "just to confirm, is there anything I need to do from my side ? Or is everything good here ?" ]
"2022-11-16T14:59:31Z"
"2022-11-22T16:27:50Z"
"2022-11-22T16:27:50Z"
MEMBER
null
null
null
After the latest `datasets` release version 2.7.0, the docs were not generated. As we have changed the release procedure (so that now we do not push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad44b02a85/.github/workflows/build_documentation.yml#L3-L8 Related to: - #5250 CC: @mishig25
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5251/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2988/comments
https://api.github.com/repos/huggingface/datasets/issues/2988/events
https://github.com/huggingface/datasets/issues/2988
1,011,148,017
I_kwDODunzps48ROTx
2,988
IndexError: Invalid key: 14 is out of bounds for size 0
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! Could you check the length of the `self.dataset` object (i.e. the Dataset object passed to the data loader) ? It looks like the dataset is empty.\r\nNot sure why the SWA optimizer would cause this though.", "Any updates on this? \r\nThe same error occurred to me too when running `cardiffnlp/twitter-roberta-base-sentiment` on a custom dataset. This happened when I tried to do `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3])` without using sagemaker distribution. \r\nPython: 3.6.13\r\ndatasets: 1.6.2", "Hi @ruisi-su, do you have this issue while using SWA as well, or just data parallel ?\r\n\r\nIf you have a code example to reproduce this issue it would also be helpful", "@lhoestq I had this issue without SWA. I followed [this](https://github.com/huggingface/notebooks/blob/master/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) notebook to utilize multiple gpus on the [roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model. This tutorial could only work if I am on `ml.p3.16xlarge`, which I don't have access to. So I tried using just `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]` before calling `trainer.fit()`. But maybe this is not the right way to do distributed training. I can provide a code example if that will be more helpful.", "It might be an issue with old versions of `datasets`, can you try updating `datasets` ?", "FYI I encountered the exact same error using the latest versions of `datasets`, `transformers` and `pyarrow`, without using any kind of SWA or dataparallel: \r\n\r\n```\r\n# packages in environment at C:\\Users\\zhang\\mambaforge:\r\n#\r\n# Name Version Build Channel\r\ncudatoolkit 11.0.3 h3f58a73_9 https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge\r\ndatasets 1.17.0 pypi_0 pypi\r\npyarrow 6.0.1 pypi_0 pypi\r\npytorch 1.7.1 py3.9_cuda110_cudnn8_0 pytorch\r\ntornado 6.1 py39hb82d6ee_2 https://mirrors.ustc.edu.cn/anaconda/cloud/conda-forge\r\n```\r\n\r\n```\r\n> python --version\r\n> 3.9.7\r\n```", "Same error here! Datasets version `1.18.3` freshly updated.\r\n\r\n`IndexError: Invalid key: 90 is out of bounds for size 0`\r\n\r\nMy task is finetuning the model for token classification.\r\n\r\n**Solved**: I make a mistake while updating the dataset during the map, you should check that you return the correct values.\r\n", "cc @sgugger This probably comes from the `Trainer` removing all the columns of a dataset, do you think we can improve the error message in this case ?", "The `Trainer` clearly logs when it removes columns in the dataset. I'm not too sure of where the bug appears as I haven't seen a clear reproducer. Happy to display a more helpful error message, but I'd need a reproducer to see what the exact problem is to design the right test and warning :-) ", "Well, if I can try to suggest how to reproduce, please try by do not returning any updated content in the map function used to tokenize input (e.g., in TokenClassification). I can leave here my wrong version for reference:\r\n\r\n```python\r\ndef preprocess_function(examples):\r\n\r\n text = examples[\"text\"]\r\n \r\n inputs = tokenizer(\r\n text,\r\n max_length=512,\r\n truncation=\"only_second\",\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n offset_mapping = inputs.pop(\"offset_mapping\")\r\n # ... 
processing code\r\n\r\n inputs[\"labels\"] = label_ids\r\n #return inputs\r\n \r\ntrain_ds = train_ds.map(preprocess_function, batched=False)\r\ntest_ds = test_ds.map(preprocess_function, batched=False)\r\neval_ds = eval_ds.map(preprocess_function, batched=False)\r\n```\r\n\r\nOf course, returning inputs solved the problem. As suggestion, a possible error message could display \"IndexError: the `key` required by trainer are not found in the dataset\" (just an hypothesis, I think there could be something better). \r\n\r\nPlease tell me if you need more details to reproduce, glad to help!", "That's the thing though. The `Trainer` has no idea which inputs are required or not since all models can have different kinds of inputs, and it can work for models outside of the Transformers library. I can add a clear error message if I get an empty batch, as this is easy to detect, but that's pretty much it.", "I think that it could be enough to ease the identification of the problem.", "Done in [this commit](https://github.com/huggingface/transformers/commit/c87cfd653c4de3d4743a9ae09d749282d94d5829)" ]
"2021-09-29T16:04:24Z"
"2022-04-10T14:49:49Z"
"2022-04-10T14:49:49Z"
NONE
null
null
null
## Describe the bug A clear and concise description of what the bug is. Hi. I am trying to implement stochastic weighted averaging optimizer with transformer library as described here https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ , for this I am using a run_clm.py codes which is working fine before adding SWA optimizer, the moment I modify the model with `swa_model = AveragedModel(model)` in this script, I am getting the below error, since I am NOT touching the dataloader part, I am confused why this is occurring, I very much appreciate your opinion on this @lhoestq ## Steps to reproduce the bug ``` Traceback (most recent call last): File "run_clm.py", line 723, in <module> main() File "run_clm.py", line 669, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train for step, inputs in enumerate(epoch_iterator): File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1530, in __getitem__ format_kwargs=self._format_kwargs, File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1517, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 368, in query_table _check_valid_index_key(key, size) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 311, in _check_valid_index_key raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 14 is out of bounds for size 0 ``` ## Expected results not getting the index error ## Actual results Please see the above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets 1.12.1 - Platform: linux - Python version: 3.7.11 - PyArrow version: 5.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2988/timeline
null
completed
false
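A hedged sketch of the failure mode this thread converges on: a `map` function that never returns its output leaves the dataset without model inputs, the `Trainer` then drops every column it does not recognize, and indexing the resulting size-0 dataset raises the `Invalid key` error. The names `tokenizer` and `ds` are assumptions, not taken from the issue:

```python
def bad_preprocess(example):
    tokenized = tokenizer(example["text"], truncation=True)  # tokenizer assumed to exist
    # BUG: no `return tokenized` -- map() treats the function as side-effect-only and
    # leaves only the raw columns in place, which the Trainer later strips away entirely

def good_preprocess(example):
    return tokenizer(example["text"], truncation=True)  # returning the dict keeps the inputs

train_ds = ds.map(good_preprocess, batched=False)
```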
https://api.github.com/repos/huggingface/datasets/issues/5458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5458/comments
https://api.github.com/repos/huggingface/datasets/issues/5458/events
https://github.com/huggingface/datasets/issues/5458
1,555,054,737
I_kwDODunzps5csECR
5,458
slice split while streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4", "events_url": "https://api.github.com/users/SvenDS9/events{/privacy}", "followers_url": "https://api.github.com/users/SvenDS9/followers", "following_url": "https://api.github.com/users/SvenDS9/following{/other_user}", "gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SvenDS9", "id": 122370631, "login": "SvenDS9", "node_id": "U_kgDOB0s6Rw", "organizations_url": "https://api.github.com/users/SvenDS9/orgs", "received_events_url": "https://api.github.com/users/SvenDS9/received_events", "repos_url": "https://api.github.com/users/SvenDS9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions", "type": "User", "url": "https://api.github.com/users/SvenDS9" }
[]
closed
false
null
[]
null
[ "Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train\").take(3)`\r\n\r\n\r\n", "Thank you for your quick response!" ]
"2023-01-24T14:08:17Z"
"2023-01-24T15:11:47Z"
"2023-01-24T15:11:47Z"
NONE
null
null
null
### Describe the bug When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported. Did I miss this in the documentation? ### Steps to reproduce the bug `load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")` causes ValueError: Bad split: train[:3]. Available splits: ['train', 'test'] in builder.py, line 1213, in as_streaming_dataset ### Expected behavior The first 3 entries of the dataset as a stream ### Environment info - `datasets` version: 2.8.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.9 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5458/timeline
null
completed
false
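A runnable sketch of the `.skip`/`.take` rewrite from the answer above, using the dataset named in the issue:

```python
from datasets import load_dataset

# split slicing like "train[:3]" is rejected in streaming mode, so slice lazily instead
stream = load_dataset("lhoestq/demo1", streaming=True, split="train")

first_three = list(stream.take(3))  # materializes only the first 3 streamed examples
remainder = stream.skip(3)          # an IterableDataset that starts at the 4th example
```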
https://api.github.com/repos/huggingface/datasets/issues/1569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1569/comments
https://api.github.com/repos/huggingface/datasets/issues/1569/events
https://github.com/huggingface/datasets/pull/1569
766,758,895
MDExOlB1bGxSZXF1ZXN0NTM5NjkwMjc2
1,569
added un_ga dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26374564?v=4", "events_url": "https://api.github.com/users/param087/events{/privacy}", "followers_url": "https://api.github.com/users/param087/followers", "following_url": "https://api.github.com/users/param087/following{/other_user}", "gists_url": "https://api.github.com/users/param087/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/param087", "id": 26374564, "login": "param087", "node_id": "MDQ6VXNlcjI2Mzc0NTY0", "organizations_url": "https://api.github.com/users/param087/orgs", "received_events_url": "https://api.github.com/users/param087/received_events", "repos_url": "https://api.github.com/users/param087/repos", "site_admin": false, "starred_url": "https://api.github.com/users/param087/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/param087/subscriptions", "type": "User", "url": "https://api.github.com/users/param087" }
[]
closed
false
null
[]
null
[]
"2020-12-14T17:42:04Z"
"2020-12-15T15:28:58Z"
"2020-12-15T15:28:58Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1569.diff", "html_url": "https://github.com/huggingface/datasets/pull/1569", "merged_at": "2020-12-15T15:28:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1569.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1569" }
Hi :hugs:, This is a PR for the [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset, with the changes suggested in #1330.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1569/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1569/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1481/comments
https://api.github.com/repos/huggingface/datasets/issues/1481/events
https://github.com/huggingface/datasets/pull/1481
762,579,658
MDExOlB1bGxSZXF1ZXN0NTM3MTEwOTM1
1,481
Fix ADD_NEW_DATASET to avoid rebasing once pushed
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2020-12-11T16:27:49Z"
"2021-01-07T10:10:20Z"
"2021-01-07T10:10:20Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1481.diff", "html_url": "https://github.com/huggingface/datasets/pull/1481", "merged_at": "2021-01-07T10:10:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/1481.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1481" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1481/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/144/comments
https://api.github.com/repos/huggingface/datasets/issues/144/events
https://github.com/huggingface/datasets/pull/144
619,477,367
MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1
144
[AWS tests] AWS test should not run for canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
"2020-05-16T13:39:30Z"
"2020-05-16T13:44:34Z"
"2020-05-16T13:44:33Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/144.diff", "html_url": "https://github.com/huggingface/datasets/pull/144", "merged_at": "2020-05-16T13:44:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/144.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/144" }
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) All datasets that are present in `nlp/datasets` are tested only locally. This way, when someone adds a canonical dataset, the PR includes their dataset in the tests. 2) All datasets that are only present on AWS, such as `webis/tl_dr` atm, are tested only on AWS. I think the testing structure might need a bigger refactoring and better documentation very soon. Merging for now to unblock new PRs @thomwolf @mariamabarham .
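A hypothetical sketch of the routing rule described above — the helper and environment-variable names are illustrative, not the repo's actual test utilities:

```python
import os
import pytest

def is_canonical(dataset_id: str) -> bool:
    # Canonical datasets live under nlp/datasets and have no namespace prefix.
    return "/" not in dataset_id

def maybe_skip(dataset_id: str) -> None:
    # RUN_AWS_TESTS is an assumed flag marking the AWS CI environment.
    running_on_aws = os.environ.get("RUN_AWS_TESTS") == "1"
    if running_on_aws and is_canonical(dataset_id):
        pytest.skip("canonical datasets are tested locally only")
    if not running_on_aws and not is_canonical(dataset_id):
        pytest.skip("AWS-only datasets (e.g. webis/tl_dr) are tested on AWS only")
```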
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/144/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/144/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2563/comments
https://api.github.com/repos/huggingface/datasets/issues/2563/events
https://github.com/huggingface/datasets/issues/2563
932,387,639
MDU6SXNzdWU5MzIzODc2Mzk=
2,563
interleave_datasets for map-style datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2021-06-29T08:57:24Z"
"2021-07-01T09:33:33Z"
"2021-07-01T09:33:33Z"
MEMBER
null
null
null
Currently the `interleave_datasets` function only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order, or sample from them at random according to probabilities specified by the user.
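A sketch of the requested behavior, assuming `interleave_datasets` is extended to accept map-style `Dataset` objects with the same `probabilities`/`seed` arguments as the iterable version:

```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12]})

# Alternate between the datasets in order: d1[0], d2[0], d1[1], d2[1], ...
ordered = interleave_datasets([d1, d2])

# Or sample at random according to user-specified probabilities.
random_mix = interleave_datasets([d1, d2], probabilities=[0.7, 0.3], seed=42)

print(ordered["a"], random_mix["a"])
```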
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2563/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2563/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4383/comments
https://api.github.com/repos/huggingface/datasets/issues/4383/events
https://github.com/huggingface/datasets/issues/4383
1,243,856,981
I_kwDODunzps5KI8BV
4,383
L
{ "avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4", "events_url": "https://api.github.com/users/AronCodes21/events{/privacy}", "followers_url": "https://api.github.com/users/AronCodes21/followers", "following_url": "https://api.github.com/users/AronCodes21/following{/other_user}", "gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AronCodes21", "id": 99847861, "login": "AronCodes21", "node_id": "U_kgDOBfOOtQ", "organizations_url": "https://api.github.com/users/AronCodes21/orgs", "received_events_url": "https://api.github.com/users/AronCodes21/received_events", "repos_url": "https://api.github.com/users/AronCodes21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions", "type": "User", "url": "https://api.github.com/users/AronCodes21" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
"2022-05-21T03:47:58Z"
"2022-05-21T19:20:13Z"
"2022-05-21T19:20:13Z"
NONE
null
null
null
## Describe the bug ## Expected results A clear and concise description of what you expected to happen. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4383/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/803/comments
https://api.github.com/repos/huggingface/datasets/issues/803/events
https://github.com/huggingface/datasets/pull/803
736,818,917
MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2
803
fix: typos in tutorial to map KILT and TriviaQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PaulLerner", "id": 25532159, "login": "PaulLerner", "node_id": "MDQ6VXNlcjI1NTMyMTU5", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "repos_url": "https://api.github.com/users/PaulLerner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "type": "User", "url": "https://api.github.com/users/PaulLerner" }
[]
closed
false
null
[]
null
[]
"2020-11-05T10:42:00Z"
"2020-11-10T09:08:07Z"
"2020-11-10T09:08:07Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/803.diff", "html_url": "https://github.com/huggingface/datasets/pull/803", "merged_at": "2020-11-10T09:08:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/803" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/803/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/999/comments
https://api.github.com/repos/huggingface/datasets/issues/999/events
https://github.com/huggingface/datasets/pull/999
755,246,786
MDExOlB1bGxSZXF1ZXN0NTMwOTk1MTY3
999
add generated_reviews_enth
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
[]
closed
false
null
[]
null
[]
"2020-12-02T12:50:43Z"
"2020-12-03T11:17:28Z"
"2020-12-03T11:17:28Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/999.diff", "html_url": "https://github.com/huggingface/datasets/pull/999", "merged_at": "2020-12-03T11:17:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/999" }
`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for the machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by the Google Translate API, and annotated as accepted or rejected (`correct`) based on the fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
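A quick usage sketch; the feature names shown below (`translation`, `correct`) follow my reading of the dataset card and are assumptions, not verified here:

```python
from datasets import load_dataset

ds = load_dataset("generated_reviews_enth", split="train")
example = ds[0]
print(example["translation"]["en"])  # English source review
print(example["translation"]["th"])  # Thai machine translation
print(example["correct"])            # 1 if the translation was accepted
```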
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/999/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4301/comments
https://api.github.com/repos/huggingface/datasets/issues/4301/events
https://github.com/huggingface/datasets/pull/4301
1,230,401,256
PR_kwDODunzps43idlE
4,301
Add ImageNet-Sketch dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright." ]
"2022-05-09T23:38:45Z"
"2022-05-23T18:14:14Z"
"2022-05-23T18:05:29Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4301.diff", "html_url": "https://github.com/huggingface/datasets/pull/4301", "merged_at": "2022-05-23T18:05:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/4301.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4301" }
This PR adds the ImageNet-Sketch dataset and resolves #3953.
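A minimal load sketch once the dataset is available; the id `imagenet_sketch` and the `image`/`label` feature names are assumptions based on this PR:

```python
from datasets import load_dataset

ds = load_dataset("imagenet_sketch", split="train")
sample = ds[0]
sample["image"].show()   # decoded PIL image of a sketch
print(sample["label"])   # one of the 1000 ImageNet class ids
```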
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4301/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1246/comments
https://api.github.com/repos/huggingface/datasets/issues/1246/events
https://github.com/huggingface/datasets/pull/1246
758,418,652
MDExOlB1bGxSZXF1ZXN0NTMzNTk0NjIz
1,246
arXiv dataset added
{ "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanmoyio", "id": 33005287, "login": "tanmoyio", "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "repos_url": "https://api.github.com/users/tanmoyio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "type": "User", "url": "https://api.github.com/users/tanmoyio" }
[]
closed
false
null
[]
null
[]
"2020-12-07T11:20:23Z"
"2020-12-07T14:22:58Z"
"2020-12-07T14:22:58Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1246.diff", "html_url": "https://github.com/huggingface/datasets/pull/1246", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1246" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1246/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1246/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/553/comments
https://api.github.com/repos/huggingface/datasets/issues/553/events
https://github.com/huggingface/datasets/pull/553
690,143,182
MDExOlB1bGxSZXF1ZXN0NDc3MDgxNTg2
553
[Fix GitHub Actions] test adding tmate
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
"2020-09-01T13:28:03Z"
"2021-05-05T18:24:38Z"
"2020-09-03T09:01:13Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/553.diff", "html_url": "https://github.com/huggingface/datasets/pull/553", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/553" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/553/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1460/comments
https://api.github.com/repos/huggingface/datasets/issues/1460/events
https://github.com/huggingface/datasets/pull/1460
761,349,149
MDExOlB1bGxSZXF1ZXN0NTM2MDI3NzYy
1,460
add Bengali Hate Speech dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[ "@lhoestq I think you might want to look at the dataset, and the first data instances mentioned in the README.md is very much offensive. Though this dataset is based on hate speech but I found the dataset heavily disturbing as Bengali is my native language.", "Hi @tanmoyio indeed you're right.\r\nWe should *at least* add very explicit mentions in the dataset card that the content of this dataset contains very offensive language. We should also put it in perspective with the tasks it tries to solve, the annotation process and the limitations.\r\n\r\nWe have to make sure that nothing is unclear/misleading nor could lead to bad usage of the dataset.\r\n\r\nWhat do you think @tanmoyio ?\r\nAlso feel free to suggest modifications in the dataset cards if you feel like some sections require corrections or more details", "> Hi @tanmoyio indeed you're right.\r\n> We should _at least_ add very explicit mentions in the dataset card that the content of this dataset contains very offensive language. We should also put it in perspective with the tasks it tries to solve, the annotation process and the limitations.\r\n> \r\n> We have to make sure that nothing is unclear/misleading nor could lead to bad usage of the dataset.\r\n> \r\n> What do you think @tanmoyio ?\r\n> Also feel free to suggest modifications in the dataset cards if you feel like some sections require corrections or more details\r\n\r\nyeah I agree with you. It would be good if \"Personal and Sensitive Information\" and \"Considerations for Using the Data\" is being explained properly in the README.md. @stevhliu ", "please let me know if there is anything else you'd like to see!", "This looks ok to merge for me. Let me know @stevhliu and @tanmoyio if you want to add something or if it looks good to you", "looks good to me @lhoestq 👍 ", "merging since the CI is fixed on master" ]
"2020-12-10T15:40:55Z"
"2021-09-17T16:54:53Z"
"2021-01-04T14:08:29Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1460.diff", "html_url": "https://github.com/huggingface/datasets/pull/1460", "merged_at": "2021-01-04T14:08:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1460" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1460/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1092/comments
https://api.github.com/repos/huggingface/datasets/issues/1092/events
https://github.com/huggingface/datasets/pull/1092
756,913,134
MDExOlB1bGxSZXF1ZXN0NTMyMzc5MDY0
1,092
Add Coached Conversation Preference Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vineeths96", "id": 50873201, "login": "vineeths96", "node_id": "MDQ6VXNlcjUwODczMjAx", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "repos_url": "https://api.github.com/users/vineeths96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "type": "User", "url": "https://api.github.com/users/vineeths96" }
[]
closed
false
null
[]
null
[]
"2020-12-04T08:36:49Z"
"2020-12-20T13:34:00Z"
"2020-12-04T13:49:50Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1092.diff", "html_url": "https://github.com/huggingface/datasets/pull/1092", "merged_at": "2020-12-04T13:49:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/1092.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1092" }
Adding the [Coached Conversation Preference Dataset](https://research.google/tools/datasets/coached-conversational-preference-elicitation/).
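An illustrative load sketch; the dataset id `coached_conv_pref` is an assumption based on this PR's naming:

```python
from datasets import load_dataset

ds = load_dataset("coached_conv_pref", split="train")
print(ds[0])  # inspect one annotated conversation
```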
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1092/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1092/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5504/comments
https://api.github.com/repos/huggingface/datasets/issues/5504/events
https://github.com/huggingface/datasets/pull/5504
1,570,621,242
PR_kwDODunzps5JPoWy
5,504
don't zero copy timestamps
{ "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dwyatte", "id": 2512762, "login": "dwyatte", "node_id": "MDQ6VXNlcjI1MTI3NjI=", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "repos_url": "https://api.github.com/users/dwyatte/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "type": "User", "url": "https://api.github.com/users/dwyatte" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008606 / 0.011353 (-0.002747) | 0.004659 / 0.011008 (-0.006349) | 0.101311 / 0.038508 (0.062802) | 0.029664 / 0.023109 (0.006555) | 0.321850 / 0.275898 (0.045952) | 0.380497 / 0.323480 (0.057017) | 0.007003 / 0.007986 (-0.000982) | 0.003393 / 0.004328 (-0.000936) | 0.078704 / 0.004250 (0.074453) | 0.035810 / 0.037052 (-0.001242) | 0.327271 / 0.258489 (0.068782) | 0.369302 / 0.293841 (0.075461) | 0.033625 / 0.128546 (-0.094921) | 0.011563 / 0.075646 (-0.064084) | 0.323950 / 0.419271 (-0.095322) | 0.040660 / 0.043533 (-0.002872) | 0.327211 / 0.255139 (0.072072) | 0.350325 / 0.283200 (0.067125) | 0.085427 / 0.141683 (-0.056256) | 1.464370 / 1.452155 (0.012216) | 1.490355 / 1.492716 (-0.002362) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202879 / 0.018006 (0.184873) | 0.419836 / 0.000490 (0.419346) | 0.000303 / 0.000200 (0.000103) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023336 / 0.037411 (-0.014075) | 0.096817 / 0.014526 (0.082291) | 0.103990 / 0.176557 (-0.072567) | 0.137749 / 0.737135 (-0.599386) | 0.108236 / 0.296338 (-0.188102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420801 / 0.215209 (0.205592) | 4.205308 / 2.077655 (2.127653) | 
2.050363 / 1.504120 (0.546243) | 1.877390 / 1.541195 (0.336195) | 2.031060 / 1.468490 (0.562570) | 0.687950 / 4.584777 (-3.896827) | 3.363202 / 3.745712 (-0.382510) | 1.869482 / 5.269862 (-3.400379) | 1.159131 / 4.565676 (-3.406545) | 0.082374 / 0.424275 (-0.341901) | 0.012425 / 0.007607 (0.004818) | 0.519775 / 0.226044 (0.293731) | 5.244612 / 2.268929 (2.975684) | 2.371314 / 55.444624 (-53.073311) | 2.052713 / 6.876477 (-4.823764) | 2.190015 / 2.142072 (0.047942) | 0.803806 / 4.805227 (-4.001421) | 0.148110 / 6.500664 (-6.352554) | 0.064174 / 0.075469 (-0.011295) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250424 / 1.841788 (-0.591364) | 13.487870 / 8.074308 (5.413561) | 13.080736 / 10.191392 (2.889344) | 0.147715 / 0.680424 (-0.532709) | 0.028409 / 0.534201 (-0.505792) | 0.397531 / 0.579283 (-0.181752) | 0.399458 / 0.434364 (-0.034905) | 0.461467 / 0.540337 (-0.078871) | 0.541639 / 1.386936 (-0.845297) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006753 / 0.011353 (-0.004600) | 0.004573 / 0.011008 (-0.006435) | 0.076122 / 0.038508 (0.037614) | 0.027529 / 0.023109 (0.004419) | 0.341291 / 0.275898 (0.065393) | 0.376889 / 0.323480 (0.053409) | 0.005032 / 0.007986 (-0.002953) | 0.003447 / 0.004328 (-0.000882) | 0.075186 / 0.004250 (0.070936) | 0.038516 / 0.037052 (0.001463) | 0.340927 / 0.258489 (0.082438) | 0.386626 / 0.293841 (0.092785) | 0.031929 / 0.128546 (-0.096617) | 0.011759 / 0.075646 (-0.063888) | 0.085616 / 0.419271 (-0.333656) | 0.042858 / 0.043533 (-0.000674) | 0.341881 / 0.255139 (0.086742) | 0.367502 / 0.283200 (0.084303) | 0.090788 / 0.141683 (-0.050895) | 1.472871 / 1.452155 (0.020716) | 1.577825 / 1.492716 (0.085109) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233137 / 0.018006 (0.215131) | 0.415016 / 0.000490 (0.414526) | 0.000379 / 0.000200 (0.000179) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024966 / 0.037411 (-0.012445) | 0.102794 / 0.014526 (0.088268) | 0.107543 / 0.176557 (-0.069014) | 0.143133 / 0.737135 (-0.594002) | 0.111494 / 0.296338 (-0.184845) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438354 / 0.215209 (0.223145) | 4.382244 / 2.077655 (2.304589) | 2.056340 / 1.504120 (0.552220) | 1.851524 / 1.541195 (0.310330) | 1.933147 / 1.468490 (0.464657) | 0.701446 / 4.584777 (-3.883331) | 3.396893 / 3.745712 (-0.348819) | 2.837516 / 5.269862 (-2.432346) | 1.538298 / 4.565676 (-3.027379) | 0.083449 / 0.424275 (-0.340826) | 0.012793 / 0.007607 (0.005186) | 0.539661 / 0.226044 (0.313616) | 5.428415 / 2.268929 (3.159487) | 2.527582 / 55.444624 (-52.917042) | 2.172795 / 6.876477 (-4.703682) | 2.220011 / 2.142072 (0.077938) | 0.814338 / 4.805227 (-3.990889) | 0.153468 / 6.500664 (-6.347196) | 0.069056 / 0.075469 (-0.006413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.278434 / 1.841788 (-0.563354) | 14.284924 / 8.074308 (6.210616) | 13.486596 / 10.191392 (3.295203) | 0.138457 / 0.680424 (-0.541967) | 0.016609 / 0.534201 (-0.517592) | 0.382828 / 0.579283 (-0.196455) | 0.387604 / 0.434364 (-0.046760) | 0.478801 / 0.540337 (-0.061536) | 0.565352 / 1.386936 (-0.821584) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c39ba501daab763b9972f44f229c66d900d20bee \"CML watermark\")\n", "> Thanks! I modified the test a bit to make it more consistent with the rest of the \"extractor\" tests.\r\n\r\nAppreciate the assist on the tests! 🚀 " ]
"2023-02-03T23:39:04Z"
"2023-02-08T17:28:50Z"
"2023-02-08T14:33:17Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5504.diff", "html_url": "https://github.com/huggingface/datasets/pull/5504", "merged_at": "2023-02-08T14:33:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/5504.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5504" }
Fixes https://github.com/huggingface/datasets/issues/5495. I'm not sure whether we prefer a test here, or if timestamps are known to be unsupported (like booleans). The current test at least covers the bug.
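A minimal reproduction sketch of the constraint this PR works around, assuming the failure is PyArrow refusing a zero-copy NumPy conversion for timestamp arrays:

```python
from datetime import datetime
import pyarrow as pa

arr = pa.array([datetime(2023, 1, 1), datetime(2023, 1, 2)], type=pa.timestamp("us"))

# Requesting zero-copy can raise pyarrow.lib.ArrowInvalid for timestamp types:
# arr.to_numpy(zero_copy_only=True)

np_arr = arr.to_numpy(zero_copy_only=False)  # copying conversion succeeds
print(np_arr.dtype)  # datetime64[us]
```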
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5504/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5504/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/217/comments
https://api.github.com/repos/huggingface/datasets/issues/217/events
https://github.com/huggingface/datasets/issues/217
627,128,403
MDU6SXNzdWU2MjcxMjg0MDM=
217
Multi-task dataset mixing
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
[ "I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. \r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?", "I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https://github.com/google-research/text-to-text-transfer-transformer/blob/03c94165a7d52e4f7230e5944a0541d8c5710788/t5/data/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n", "I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. 
T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. ", "This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).", "@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n", "I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.", "From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). 
Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!", "I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name/type for columns (if needed you easily rename/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways/kwargs for sampling among datasets).", "The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n", "A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n 
if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ", "There is an interesting related investigation by @zphang here https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb", "Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n", "Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.", "Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. 
I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", 
\"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.", "Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long/short inputs). How are you currently dealing with these in your PR?", "The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!", "@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.", "mt-dnn's [batcher.py](https://github.com/namisan/mt-dnn/blob/master/mt_dnn/batcher.py) might be worth looking at.", "> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.", "I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.", "Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for example).\r\n\r\n", "Hi,\r\nregarding building T5 dataset , I think we can use datasets https://github.com/huggingface/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. ", "Is this feature part of a `datasets` release yet? ", "> Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n> \r\n> * I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. 
The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n> * I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n> * I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n> * I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n> * This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n> \r\n> ```python\r\n> import nlp\r\n> import numpy as np\r\n> \r\n> class MultiDataset:\r\n> def __init__(self,tasks):\r\n> self.tasks = tasks\r\n> \r\n> # Create random order of tasks\r\n> # Using size-proportional sampling\r\n> task_choice_list = []\r\n> for i, task in enumerate(self.tasks):\r\n> task_choice_list += [i] * len(task)\r\n> task_choice_list = np.array(task_choice_list)\r\n> np.random.shuffle(task_choice_list)\r\n> \r\n> # Add index into each dataset\r\n> # - We don't want to shuffle within each task\r\n> counters = {}\r\n> self.task_choice_list = []\r\n> for i in range(len(task_choice_list)):\r\n> idx = counters.get(task_choice_list[i],0)\r\n> self.task_choice_list.append((task_choice_list[i],idx))\r\n> counters[task_choice_list[i]] = idx + 1\r\n> \r\n> \r\n> def __len__(self):\r\n> return np.sum([len(t) for t in self.tasks])\r\n> \r\n> def __repr__(self):\r\n> task_str = \", \".join([str(t) for t in self.tasks])\r\n> return f\"MultiDataset(tasks: {task_str})\"\r\n> \r\n> def __getitem__(self,key):\r\n> if isinstance(key, int):\r\n> task_idx, example_idx = self.task_choice_list[key]\r\n> task = self.tasks[task_idx]\r\n> example = task[example_idx]\r\n> example[\"task_name\"] = task.info.builder_name\r\n> return example\r\n> elif isinstance(key, slice):\r\n> raise NotImplementedError()\r\n> \r\n> def __iter__(self):\r\n> for i in range(len(self)):\r\n> yield self[i]\r\n> \r\n> \r\n> def load_multitask(*datasets):\r\n> '''Create multitask datasets per split'''\r\n> \r\n> def _get_common_splits(datasets):\r\n> '''Finds the common splits present in all self.datasets'''\r\n> min_set = None\r\n> for dataset in datasets:\r\n> if min_set != None:\r\n> min_set.intersection(set(dataset.keys()))\r\n> else:\r\n> min_set = set(dataset.keys())\r\n> return min_set\r\n> \r\n> common_splits = _get_common_splits(datasets)\r\n> out = {}\r\n> for split in common_splits:\r\n> out[split] = MultiDataset([d[split] for d in datasets])\r\n> return out\r\n> \r\n> \r\n> ##########################################\r\n> # Dataset Flattening\r\n> \r\n> def flatten(dataset,flatten_fn):\r\n> for k in dataset.keys():\r\n> if isinstance(dataset[k],nlp.Dataset):\r\n> dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n> \r\n> # Squad\r\n> def flatten_squad(example):\r\n> return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n> \"target\":example[\"answers\"][\"text\"]}\r\n> squad = nlp.load_dataset(\"squad\")\r\n> flatten(squad,flatten_squad)\r\n> \r\n> # CNN_DM\r\n> def flatten_cnn_dm(example):\r\n> return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\n> cnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\n> flatten(cnn_dm,flatten_cnn_dm)\r\n> \r\n> 
#############################################\r\n> \r\n> mtds = load_multitask(squad,cnn_dm)\r\n> \r\n> for example in mtds[\"train\"]:\r\n> print(example[\"task_name\"],example[\"target\"])\r\n> ```\r\n> \r\n> Let me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.\r\n\r\nNot sure if this is what I'm looking for, but I implemented a version of Examples-Proportional mixing supporting only the basic feature [here](https://stackoverflow.com/a/74070116/10732321), seems to work in my project. ", "You can use `interleave_datasets` to mix several datasets together. By default it alternates between all the datasets, but you can also provide sampling probabilities if you want to oversample from one of the datasets\r\n\r\n```python\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\ncnn_dm = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\nds = interleave_datasets([squad, cnn_dm])\r\n\r\nprint(ds[0])\r\n# {'id': '5733be284776f41900661182',\r\n# 'title': 'University_of_Notre_Dame',\r\n# 'context': 'Architecturally, the school has a Catholic character...',\r\n# 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\r\n# 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]},\r\n# 'article': None,\r\n# 'highlights': None}\r\nprint(ds[1])\r\n# {'id': '42c027e4ff9730fbb3de84c1af0d2c506e41c3e4',\r\n# 'title': None,\r\n# 'context': None,\r\n# 'question': None,\r\n# 'answers': None,\r\n# 'article': 'LONDON, England (Reuters) -- Harry Potter star Daniel Radcliffe...',\r\n# 'highlights': \"Harry Potter star Daniel Radcliffe...\"}\r\n```\r\n\r\nsee docs at https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.interleave_datasets", "I also have this implementation of multi-task sampler here which I used it to tune T5: https://github.com/rabeehk/hyperformer/blob/main/hyperformer/data/multitask_sampler.py " ]
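One detail worth flagging in the `load_multitask` snippets above: `_get_common_splits` never keeps the result of `set.intersection`, so it effectively returns the first dataset's splits. A minimal corrected sketch:

```python
def _get_common_splits(datasets):
    """Finds the split names (e.g. train/validation/test) present in all datasets."""
    min_set = None
    for dataset in datasets:
        if min_set is not None:
            min_set = min_set.intersection(set(dataset.keys()))  # keep the result
        else:
            min_set = set(dataset.keys())
    return min_set
```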
"2020-05-29T09:22:26Z"
"2022-10-22T00:45:50Z"
null
CONTRIBUTOR
null
null
null
It seems like many of the best-performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).

The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:

- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - the generalized approach used by multilingual BERT, which uses a temperature T: the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it becomes closer to equal mixing as T increases.

Following this discussion https://github.com/huggingface/transformers/issues/4340 in [transformers](https://github.com/huggingface/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality.

Some method for combining datasets could be implemented, e.g.

```
dataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...)
```

We would need a few additions:

- Method of identifying the tasks - how can we support adding a string to each task as an identifier, e.g. 'summarisation: '?
- Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score.

It would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn. I'm willing to write bits/most of this; I just need some guidance on the interface and other library details so I can integrate it properly.
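A minimal sketch of the temperature-scaled mixing rates described above; the train-set sizes are illustrative placeholders, not values taken from this issue:

```python
import numpy as np

# Illustrative train-set sizes for three hypothetical tasks
sizes = np.array([87_599, 287_113, 25_000])

T = 2.0  # temperature; T=1 recovers examples-proportional mixing
rates = sizes ** (1 / T)
probs = rates / rates.sum()  # renormalized per-task sampling probabilities
print(probs.round(3))
```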
{ "+1": 12, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 12, "url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/217/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2323/comments
https://api.github.com/repos/huggingface/datasets/issues/2323/events
https://github.com/huggingface/datasets/issues/2323
876,438,507
MDU6SXNzdWU4NzY0Mzg1MDc=
2,323
load_dataset("timit_asr") gives back duplicates of just one sample text
{ "avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4", "events_url": "https://api.github.com/users/ekeleshian/events{/privacy}", "followers_url": "https://api.github.com/users/ekeleshian/followers", "following_url": "https://api.github.com/users/ekeleshian/following{/other_user}", "gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ekeleshian", "id": 33647474, "login": "ekeleshian", "node_id": "MDQ6VXNlcjMzNjQ3NDc0", "organizations_url": "https://api.github.com/users/ekeleshian/orgs", "received_events_url": "https://api.github.com/users/ekeleshian/received_events", "repos_url": "https://api.github.com/users/ekeleshian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions", "type": "User", "url": "https://api.github.com/users/ekeleshian" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Upgrading datasets to version 1.6 fixes the issue", "This bug was fixed in #1995. Upgrading the `datasets` should work! ", "Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists." ]
"2021-05-05T13:14:48Z"
"2021-05-07T10:32:30Z"
"2021-05-07T10:32:30Z"
NONE
null
null
null
## Describe the bug
When you look up the key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times, namely the sentence "Would such an act of refusal be useful?". Similarly, when you look up ['test'] and then ['text'], the list is the sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times.

I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836), and removing the entire huggingface directory from ~/.cache, but I still get the same issue.

## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```

## Expected Result
Rows of diverse text, as shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb).
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">

## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">

## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2323/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4463/comments
https://api.github.com/repos/huggingface/datasets/issues/4463/events
https://github.com/huggingface/datasets/pull/4463
1,265,093,211
PR_kwDODunzps45Vnzu
4,463
Use config_id to check split sizes instead of config name
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "closing in favor of https://github.com/huggingface/datasets/pull/4465" ]
"2022-06-08T17:45:24Z"
"2023-09-24T10:03:00Z"
"2022-06-09T08:06:37Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4463.diff", "html_url": "https://github.com/huggingface/datasets/pull/4463", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4463" }
Fix https://github.com/huggingface/datasets/issues/4462
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4463/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2237/comments
https://api.github.com/repos/huggingface/datasets/issues/2237/events
https://github.com/huggingface/datasets/issues/2237
861,427,439
MDU6SXNzdWU4NjE0Mjc0Mzk=
2,237
Update Dataset.dataset_size after transformed with map
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!" ]
"2021-04-19T15:19:38Z"
"2021-04-20T14:22:05Z"
null
MEMBER
null
null
null
After loading a dataset, if we transform it using `.map`, its `dataset_size` attribute is not updated.
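A minimal sketch reproducing the reported behaviour (assuming a dataset whose `info.dataset_size` is populated, e.g. squad):

```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")
print(ds.dataset_size)  # size recorded when the dataset was built

ds2 = ds.map(lambda x: {"question": x["question"].lower()})
print(ds2.dataset_size)  # same value: not recomputed after the transform
```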
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2237/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2912/comments
https://api.github.com/repos/huggingface/datasets/issues/2912/events
https://github.com/huggingface/datasets/pull/2912
996,256,005
PR_kwDODunzps4rvhgp
2,912
Update link to Blog in docs footer
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2021-09-14T17:23:14Z"
"2021-09-15T07:59:23Z"
"2021-09-15T07:59:23Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "html_url": "https://github.com/huggingface/datasets/pull/2912", "merged_at": "2021-09-15T07:59:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912" }
Update link.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2912/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2912/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6151/comments
https://api.github.com/repos/huggingface/datasets/issues/6151/events
https://github.com/huggingface/datasets/issues/6151
1,851,497,818
I_kwDODunzps5uW51a
6,151
Faster sorting for single key items
{ "avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4", "events_url": "https://api.github.com/users/jackapbutler/events{/privacy}", "followers_url": "https://api.github.com/users/jackapbutler/followers", "following_url": "https://api.github.com/users/jackapbutler/following{/other_user}", "gists_url": "https://api.github.com/users/jackapbutler/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jackapbutler", "id": 47942453, "login": "jackapbutler", "node_id": "MDQ6VXNlcjQ3OTQyNDUz", "organizations_url": "https://api.github.com/users/jackapbutler/orgs", "received_events_url": "https://api.github.com/users/jackapbutler/received_events", "repos_url": "https://api.github.com/users/jackapbutler/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jackapbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackapbutler/subscriptions", "type": "User", "url": "https://api.github.com/users/jackapbutler" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "`Dataset.sort` essentially does the same thing except it uses `pyarrow.compute.sort_indices` which doesn't involve copying the data into python objects (saving memory)\r\n\r\n```python\r\nsort_keys = [(col, \"ascending\") for col in column_names]\r\nindices = pc.sort_indices(self.data, sort_keys=sort_keys)\r\nreturn self.select(indices)\r\n```", "Ok interesting, I'll continue debugging to see what is going wrong on my end." ]
"2023-08-15T14:02:31Z"
"2023-08-21T14:38:26Z"
"2023-08-21T14:38:25Z"
NONE
null
null
null
### Feature request

A faster way to sort a dataset which contains a large number of rows.

### Motivation

The current sorting implementation took significantly longer than expected when I was running it on a dataset, trying to sort by timestamps.

**Code snippet:**
```python
ds = datasets.load_dataset(
    "json",
    **{"data_files": {"train": "path-to-jsonlines"}, "split": "train"},
    num_proc=os.cpu_count(),
    keep_in_memory=True)

sorted_ds = ds.sort("pubDate", keep_in_memory=True)
```

However, it was significantly faster (orders of magnitude, especially with millions of rows) once I switched to a different method which:
1. unpacked the rows to a list of `(index, key)` tuples
2. sorted the tuples by key
3. ran `.select` with the sorted list of indices

### Your contribution

I'd be happy to implement a crude single-key sorting algorithm so that other users can benefit from this trick. Broadly, this would take a `Dataset` and perform:

```python
# ds is a Dataset object
# key_name is the sorting key

class Dataset:
    ...
    def _sort(self, key_name: str) -> "Dataset":
        index_keys = [(i, x) for i, x in enumerate(self[key_name])]
        sorted_rows = sorted(index_keys, key=lambda x: x[1])
        sorted_indices = [x[0] for x in sorted_rows]
        return self.select(sorted_indices)
```
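For comparison, a minimal sketch of the Arrow-native approach quoted in the comments above, applied to a toy table; accessing the underlying `pyarrow.Table` via `ds.data.table` is an assumption about internals that mirrors the `self.data` usage in that snippet:

```python
import pyarrow.compute as pc
from datasets import Dataset

ds = Dataset.from_dict({"pubDate": [3, 1, 2], "text": ["c", "a", "b"]})

# Sort indices computed by Arrow, without copying rows into Python objects
indices = pc.sort_indices(ds.data.table, sort_keys=[("pubDate", "ascending")])
sorted_ds = ds.select(indices.to_pylist())
print(sorted_ds["text"])  # ['a', 'b', 'c']
```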
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6151/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5988/comments
https://api.github.com/repos/huggingface/datasets/issues/5988/events
https://github.com/huggingface/datasets/issues/5988
1,773,257,828
I_kwDODunzps5pscRk
5,988
ConnectionError: Couldn't reach dataset_infos.json
{ "avatar_url": "https://avatars.githubusercontent.com/u/20674868?v=4", "events_url": "https://api.github.com/users/yulingao/events{/privacy}", "followers_url": "https://api.github.com/users/yulingao/followers", "following_url": "https://api.github.com/users/yulingao/following{/other_user}", "gists_url": "https://api.github.com/users/yulingao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yulingao", "id": 20674868, "login": "yulingao", "node_id": "MDQ6VXNlcjIwNjc0ODY4", "organizations_url": "https://api.github.com/users/yulingao/orgs", "received_events_url": "https://api.github.com/users/yulingao/received_events", "repos_url": "https://api.github.com/users/yulingao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yulingao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yulingao/subscriptions", "type": "User", "url": "https://api.github.com/users/yulingao" }
[]
closed
false
null
[]
null
[ "Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?" ]
"2023-06-25T12:39:31Z"
"2023-07-07T13:20:57Z"
"2023-07-07T13:20:57Z"
NONE
null
null
null
### Describe the bug

I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:

ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))

### Steps to reproduce the bug

train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')

### Expected behavior

download the dataset

### Environment info

centos7
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5988/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5736/comments
https://api.github.com/repos/huggingface/datasets/issues/5736/events
https://github.com/huggingface/datasets/issues/5736
1,662,286,061
I_kwDODunzps5jFHjt
5,736
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
[]
open
false
null
[]
null
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?", "I have the same error with `datasets==2.14.5` and `pyarrow==13.0.0`. Python 3.10.13", "I have same error. Any workaround?" ]
"2023-04-11T11:29:15Z"
"2023-11-30T07:16:58Z"
null
NONE
null
null
null
### Describe the bug

Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run.

### Steps to reproduce the bug

I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1.

1. Set up a script `my_dataset.py` to generate and load an offline dataset.
2. Load it with
```python
ds = datasets.load_dataset(path=/path/to/my_dataset.py,
                           name='toy',
                           data_dir=/path/to/my_dataset.py,
                           cache_dir=cache_dir,
                           download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
                           )
```
It loads fine
```
Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data.
```
3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error
```
2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
    builder_instance.download_and_prepare(
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare
    with incomplete_dir(self._output_dir) as tmp_output_dir:
  File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir
    shutil.rmtree(dirname)
  File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree
    onerror(os.rmdir, path, sys.exc_info())
  File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree
    os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c'
```

### Expected behavior

Regenerate the dataset from scratch and reload it.

### Environment info

- `datasets` version: 2.10.1
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5736/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5579/comments
https://api.github.com/repos/huggingface/datasets/issues/5579/events
https://github.com/huggingface/datasets/pull/5579
1,599,732,211
PR_kwDODunzps5Kwgo4
5,579
Add instructions to create `DataLoader` from augmented dataset in object detection guide
{ "avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4", "events_url": "https://api.github.com/users/Laurent2916/events{/privacy}", "followers_url": "https://api.github.com/users/Laurent2916/followers", "following_url": "https://api.github.com/users/Laurent2916/following{/other_user}", "gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Laurent2916", "id": 21087104, "login": "Laurent2916", "node_id": "MDQ6VXNlcjIxMDg3MTA0", "organizations_url": "https://api.github.com/users/Laurent2916/orgs", "received_events_url": "https://api.github.com/users/Laurent2916/received_events", "repos_url": "https://api.github.com/users/Laurent2916/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions", "type": "User", "url": "https://api.github.com/users/Laurent2916" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5579). All of your documentation changes will be reflected on that endpoint.", "I'm not sure we need this part as we provide a link to the notebook that shows how to train an object detection model, and this notebook instantiates a `DataLoader` before training the model. I'd like to hear what @stevhliu thinks.\r\n\r\nPS: Your `collate_fn` calls `torch.stack` on the `bbox` tensors, which don't have the same shape, so this will fail.", "I agree with @mariosasko; we also have a [Use with PyTorch](https://huggingface.co/docs/datasets/use_with_pytorch) guide that shows how you can create a `DataLoader`. " ]
"2023-02-25T14:53:17Z"
"2023-03-23T19:24:59Z"
"2023-03-23T19:24:50Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5579.diff", "html_url": "https://github.com/huggingface/datasets/pull/5579", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5579" }
This adds instructions on how to create a `DataLoader` to the guide on using object detection with augmentations (#4710). I am open to hearing any suggestions for improvement!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5579/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5579/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5176/comments
https://api.github.com/repos/huggingface/datasets/issues/5176/events
https://github.com/huggingface/datasets/issues/5176
1,430,214,539
I_kwDODunzps5VP1eL
5,176
prepare dataset for cloud storage doesn't work
{ "avatar_url": "https://avatars.githubusercontent.com/u/27285078?v=4", "events_url": "https://api.github.com/users/araonblake/events{/privacy}", "followers_url": "https://api.github.com/users/araonblake/followers", "following_url": "https://api.github.com/users/araonblake/following{/other_user}", "gists_url": "https://api.github.com/users/araonblake/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/araonblake", "id": 27285078, "login": "araonblake", "node_id": "MDQ6VXNlcjI3Mjg1MDc4", "organizations_url": "https://api.github.com/users/araonblake/orgs", "received_events_url": "https://api.github.com/users/araonblake/received_events", "repos_url": "https://api.github.com/users/araonblake/repos", "site_admin": false, "starred_url": "https://api.github.com/users/araonblake/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/araonblake/subscriptions", "type": "User", "url": "https://api.github.com/users/araonblake" }
[]
closed
false
null
[]
null
[ "It looks like an issue with `gcsfs`, are you able to instantiate a `GCSFileSystem` manually ?", "closing since it was probably due to gcsfs" ]
"2022-10-31T17:28:57Z"
"2023-03-28T09:11:46Z"
"2023-03-28T09:11:45Z"
NONE
null
null
null
### Describe the bug Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading and storing huggingface dataset to cloud storage. ``` from datasets import load_dataset, load_dataset_builder dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH') dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet") ``` The above code successfully downloaded dataset, however, it returns error from `download_and_prepare`. > Traceback (most recent call last): > File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module> > dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet") > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare > fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths > cls = get_filesystem_class(protocol) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class > register_implementation(protocol, _import_class(bit["class"])) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 257, in _import_class > mod = importlib.import_module(mod) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/importlib/__init__.py", line 127, in import_module > return _bootstrap._gcd_import(name[level:], package, level) > File "<frozen importlib._bootstrap>", line 1030, in _gcd_import > File "<frozen importlib._bootstrap>", line 1007, in _find_and_load > File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked > File "<frozen importlib._bootstrap>", line 680, in _load_unlocked > File "<frozen importlib._bootstrap_external>", line 850, in exec_module > File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed > File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module> > dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet") > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare > fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths > cls = get_filesystem_class(protocol) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class > register_implementation(protocol, _import_class(bit["class"])) > File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 258, in _import_class > return getattr(mod, name) > AttributeError: partially initialized module 'gcsfs' has no attribute 'GCSFileSystem' (most likely due to a circular import) ### Steps to reproduce the bug 1. pip install datasets==2.6.1 gcsfs==2022.8.2 2. 
Run the following code to reproduce the issue (change `LOCAL_PATH` and `Bucket_NAME` accordingly):
```
from datasets import load_dataset, load_dataset_builder
dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```

### Expected behavior

The dataset downloads successfully and is uploaded to cloud storage.

### Environment info

- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.5.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5176/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
https://api.github.com/repos/huggingface/datasets/issues/2206/events
https://github.com/huggingface/datasets/issues/2206
855,252,415
MDU6SXNzdWU4NTUyNTI0MTU=
2,206
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
{ "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yana-xuyan", "id": 38536635, "login": "yana-xuyan", "node_id": "MDQ6VXNlcjM4NTM2NjM1", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "type": "User", "url": "https://api.github.com/users/yana-xuyan" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?", "Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.", "I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```", "@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n", "Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue", "Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. 
Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n", "Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! " ]
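Before the fix mentioned above landed, the `features=` workaround from this thread looked roughly like the following sketch; the toy `tokenize` function stands in for a real tokenizer call that emits token ids above the int8 range:

```python
from datasets import Dataset, Features, Sequence, Value

ds = Dataset.from_dict({"text": ["hello world"]})

def tokenize(example):
    # Stand-in for a tokenizer that returns ids above the int8 range
    return {"input_ids": [50259, 15, 2], "special_tokens_mask": [1, 0, 0]}

features = Features({
    "text": Value("string"),
    "input_ids": Sequence(Value("int32")),
    "special_tokens_mask": Sequence(Value("int32")),  # wide enough for 50259
})

ds = ds.map(tokenize, features=features)
print(ds[0]["input_ids"])  # [50259, 15, 2]
```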
"2021-04-11T08:40:09Z"
"2021-11-10T12:18:30Z"
"2021-11-10T12:04:28Z"
NONE
null
null
null
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I get the error shown below:

```
Traceback (most recent call last):
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
    writer.write(example)
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
    self.write_on_file()
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
    pa_array = pa.array(typed_sequence)
  File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
    out = out.cast(pa.list_(self.optimized_int_type))
  File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
  File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
    return call_function("cast", [arr], options)
  File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
```

Do you have any idea about it?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3569/comments
https://api.github.com/repos/huggingface/datasets/issues/3569/events
https://github.com/huggingface/datasets/pull/3569
1,100,478,994
PR_kwDODunzps4w3XGo
3,569
Add the DKTC dataset (Extension of #3564)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sooftware", "id": 42150335, "login": "sooftware", "node_id": "MDQ6VXNlcjQyMTUwMzM1", "organizations_url": "https://api.github.com/users/sooftware/orgs", "received_events_url": "https://api.github.com/users/sooftware/received_events", "repos_url": "https://api.github.com/users/sooftware/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "type": "User", "url": "https://api.github.com/users/sooftware" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "I reflect your comment! @lhoestq ", "Wait, the format of the data just changed, so I'll take it into consideration and commit it.", "I update the code according to the dataset structure change.", "Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).", "> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n", "Hi ! I see, in this case ca you make sure that the dummy data has a full sample ?\r\n\r\nFeel free to open the dummy train.csv in the dummy_data.zip file and add the missing lines", "Sorry, I'm late to check! I'll send it to you soon!", "Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.", "Close this PR. Thanks!" ]
"2022-01-12T15:31:29Z"
"2022-10-01T06:43:05Z"
"2022-10-01T06:43:04Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3569.diff", "html_url": "https://github.com/huggingface/datasets/pull/3569", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3569.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3569" }
New pull request extending #3564 (for DKTC).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3569/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4980/comments
https://api.github.com/repos/huggingface/datasets/issues/4980/events
https://github.com/huggingface/datasets/issues/4980
1,374,868,083
I_kwDODunzps5R8tJz
4,980
Make `pyarrow` optional
{ "avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4", "events_url": "https://api.github.com/users/KOLANICH/events{/privacy}", "followers_url": "https://api.github.com/users/KOLANICH/followers", "following_url": "https://api.github.com/users/KOLANICH/following{/other_user}", "gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KOLANICH", "id": 240344, "login": "KOLANICH", "node_id": "MDQ6VXNlcjI0MDM0NA==", "organizations_url": "https://api.github.com/users/KOLANICH/orgs", "received_events_url": "https://api.github.com/users/KOLANICH/received_events", "repos_url": "https://api.github.com/users/KOLANICH/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions", "type": "User", "url": "https://api.github.com/users/KOLANICH" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)", "Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ", "Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n" ]
"2022-09-15T17:38:03Z"
"2022-09-16T17:23:47Z"
"2022-09-16T17:23:47Z"
NONE
null
null
null
**Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?

**Describe the solution you'd like**
It is made optional.

**Describe alternatives you've considered**
Likely, no.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4980/timeline
null
completed
false
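The maintainer reply in this record points to streaming mode as the way to sidestep the Arrow cache. A minimal sketch of that suggestion — the dataset name is only a placeholder, and note that `pyarrow` itself still has to be installed for `datasets` to import:

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset: examples are produced on the fly,
# so nothing is written to the pyarrow-backed on-disk cache
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(next(iter(ds)))  # a plain dict for the first example
```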
https://api.github.com/repos/huggingface/datasets/issues/4533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4533/comments
https://api.github.com/repos/huggingface/datasets/issues/4533/events
https://github.com/huggingface/datasets/issues/4533
1,277,211,490
I_kwDODunzps5MILNi
4,533
Timestamp not returned as datetime objects in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
"2022-06-20T17:28:47Z"
"2022-06-22T16:29:09Z"
"2022-06-22T16:29:09Z"
MEMBER
null
null
null
As reported in (internal) https://github.com/huggingface/datasets-server/issues/397 ```python >>> from datasets import load_dataset >>> dataset = load_dataset("ett", name="h2", split="test", streaming=True) >>> d = next(iter(dataset)) >>> d['start'] Timestamp('2016-07-01 00:00:00') ``` while loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4533/timeline
null
completed
false
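Until streaming and non-streaming modes agree, one possible workaround is to convert the streamed value explicitly. This sketch assumes the value is a `pandas.Timestamp`, as shown in the report:

```python
import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")  # stand-in for d['start'] from the streamed example
dt = ts.to_pydatetime()                   # back to a stdlib datetime
print(type(dt), dt)                       # <class 'datetime.datetime'> 2016-07-01 00:00:00
```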
https://api.github.com/repos/huggingface/datasets/issues/1019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1019/comments
https://api.github.com/repos/huggingface/datasets/issues/1019/events
https://github.com/huggingface/datasets/pull/1019
755,582,090
MDExOlB1bGxSZXF1ZXN0NTMxMjY2NzAz
1,019
Add caWaC dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
"2020-12-02T20:18:55Z"
"2020-12-03T14:47:09Z"
"2020-12-03T14:47:09Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1019.diff", "html_url": "https://github.com/huggingface/datasets/pull/1019", "merged_at": "2020-12-03T14:47:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1019.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1019" }
Add dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1019/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1019/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6305/comments
https://api.github.com/repos/huggingface/datasets/issues/6305/events
https://github.com/huggingface/datasets/issues/6305
1,946,010,912
I_kwDODunzps5z_cUg
6,305
Cannot load dataset with `2.14.5`: `FileNotFound` error
{ "avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4", "events_url": "https://api.github.com/users/finiteautomata/events{/privacy}", "followers_url": "https://api.github.com/users/finiteautomata/followers", "following_url": "https://api.github.com/users/finiteautomata/following{/other_user}", "gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/finiteautomata", "id": 167943, "login": "finiteautomata", "node_id": "MDQ6VXNlcjE2Nzk0Mw==", "organizations_url": "https://api.github.com/users/finiteautomata/orgs", "received_events_url": "https://api.github.com/users/finiteautomata/received_events", "repos_url": "https://api.github.com/users/finiteautomata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions", "type": "User", "url": "https://api.github.com/users/finiteautomata" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting, @finiteautomata.\r\n\r\nWe are investigating it. ", "There is a bug in `datasets`. You can see our proposed fix:\r\n- #6309 " ]
"2023-10-16T20:11:27Z"
"2023-10-18T13:50:36Z"
"2023-10-18T13:50:36Z"
NONE
null
null
null
### Describe the bug I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`. ### Steps to reproduce the bug [Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg) ```python Downloading readme: 100% 1.19k/1.19k [00:00<00:00, 30.9kB/s] --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-2-807c3583d297>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 load_dataset("piuba-bigdata/articles_and_comments", split="train") 2 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2127 2128 # Create a dataset builder -> 2129 builder_instance = load_dataset_builder( 2130 path=path, 2131 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1813 download_config = download_config.copy() if download_config else DownloadConfig() 1814 download_config.storage_options.update(storage_options) -> 1815 dataset_module = dataset_module_factory( 1816 path, 1817 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1506 raise e1 from None 1507 if isinstance(e1, FileNotFoundError): -> 1508 raise FileNotFoundError( 1509 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1510 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" FileNotFoundError: Couldn't find a dataset script at /content/piuba-bigdata/articles_and_comments/articles_and_comments.py or any data file in the same directory. Couldn't find 'piuba-bigdata/articles_and_comments' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in piuba-bigdata/articles_and_comments. ``` ### Expected behavior It should load normally. ### Environment info ``` - `datasets` version: 2.14.5 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.18.0 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6305/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6305/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4303/comments
https://api.github.com/repos/huggingface/datasets/issues/4303/events
https://github.com/huggingface/datasets/pull/4303
1,230,867,728
PR_kwDODunzps43j8cH
4,303
Fix: Add missing comma
{ "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mrm8488", "id": 3653789, "login": "mrm8488", "node_id": "MDQ6VXNlcjM2NTM3ODk=", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "repos_url": "https://api.github.com/users/mrm8488/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "type": "User", "url": "https://api.github.com/users/mrm8488" }
[]
closed
false
null
[]
null
[ "The CI failure is unrelated to this PR and fixed on master, merging :)" ]
"2022-05-10T09:21:38Z"
"2022-05-11T08:50:15Z"
"2022-05-11T08:50:14Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4303.diff", "html_url": "https://github.com/huggingface/datasets/pull/4303", "merged_at": "2022-05-11T08:50:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4303.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4303" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4303/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/714/comments
https://api.github.com/repos/huggingface/datasets/issues/714/events
https://github.com/huggingface/datasets/pull/714
714,487,881
MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx
714
Add the official dependabot implementation
{ "avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4", "events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}", "followers_url": "https://api.github.com/users/ALazyMeme/followers", "following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}", "gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ALazyMeme", "id": 12804673, "login": "ALazyMeme", "node_id": "MDQ6VXNlcjEyODA0Njcz", "organizations_url": "https://api.github.com/users/ALazyMeme/orgs", "received_events_url": "https://api.github.com/users/ALazyMeme/received_events", "repos_url": "https://api.github.com/users/ALazyMeme/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions", "type": "User", "url": "https://api.github.com/users/ALazyMeme" }
[]
closed
false
null
[]
null
[]
"2020-10-05T03:49:45Z"
"2020-10-12T11:49:21Z"
"2020-10-12T11:49:21Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/714.diff", "html_url": "https://github.com/huggingface/datasets/pull/714", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/714" }
This will keep dependencies up to date. This will require a PR label `dependencies` to be created in order to function correctly.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/714/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5169/comments
https://api.github.com/repos/huggingface/datasets/issues/5169/events
https://github.com/huggingface/datasets/pull/5169
1,425,075,254
PR_kwDODunzps5Bow1Q
5,169
Add "ipykernel" to list of `co_filename`s to remove
{ "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gpucce", "id": 32967787, "login": "gpucce", "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "organizations_url": "https://api.github.com/users/gpucce/orgs", "received_events_url": "https://api.github.com/users/gpucce/received_events", "repos_url": "https://api.github.com/users/gpucce/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "type": "User", "url": "https://api.github.com/users/gpucce" }
[]
closed
false
null
[]
null
[ "I don't know how I could add some tests for this, although jupyter is not among the dependencies so at least that would need to be added. If someone can tell a recommended way I will try to do it!", "So testing by myself and looking around the jupyter codebase it looks like the `co_filename` of objects created within jupyter is of the form `f\"{tempfile.tempdir}/ipykernel_{id1}/{id2}.py\"` however I can't find the exact command setting it so I [asked in discourse](https://discourse.jupyter.org/t/co-filename-within-notebooks/16538). For now adapted the `co_filename` filter and added tests according to this I hope to get an answer and possibly fix based on that.", "Ok ! I think it's fine to just check if the parent folder is named like `ipykernel_*` then\r\n\r\nsee the source code of how it's created:\r\n\r\nhttps://github.com/ipython/ipykernel/blob/7f73ff705510b35d1e2faad7f5a676c620ce08d4/ipykernel/compiler.py#L72-L75", "Should look better now didn't notice the duplicated tests", "_The documentation is not available anymore as the PR was closed or merged._", "Should work now on windows too", "I did the changes you suggested and tried to rebase, the first part went fine, the second less so :( \r\n\r\nIf you have time to spare, can you tell me what should I do now to fix this? thanks", "Instead of rebasing you can just merge `main` into your branch, otherwise the GitHub preview of your PR shows changes of from `main`.\r\n\r\nFeel free to close this PR and create a new one. Or alternatively your can force push to this PR with a new clean git history.", "I have force-pushed and merged main, only shows the right changes, if you can run CI one more time it should be ok now", "Hi, sorry I have been busy, the thing is I can't really understand why the test fail, besides the ugly thing I had done in the last commit to check if within CI smth stange happened with `os`, locally tests pass", "The CI wasn't passing when using the latest version `dill==0.3.6`. We have a separate function to dump CodeType objects for 0.3.6\r\n\r\nI applied the same changes you did to this other function as well - it should be all good now", "> The CI wasn't passing when using the latest version `dill==0.3.6`. We have a separate function to dump CodeType objects for 0.3.6\r\n> \r\n> I applied the same changes you did to this other function as well - it should be all good now\r\n\r\nThanks, it would have taken a long time to figure out :)" ]
"2022-10-27T05:56:17Z"
"2022-11-02T15:46:00Z"
"2022-11-02T15:43:20Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5169.diff", "html_url": "https://github.com/huggingface/datasets/pull/5169", "merged_at": "2022-11-02T15:43:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/5169.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5169" }
Should resolve #5157
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5169/timeline
null
null
true
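The thread converges on recognizing notebook cells by the `ipykernel_*` parent directory of a code object's filename. A rough sketch of that check, assuming the `co_filename` shape quoted in the comments (not the actual code merged in the PR):

```python
import os
import tempfile

def looks_like_ipykernel(co_filename: str) -> bool:
    # Jupyter compiles cells to f"{tempfile.tempdir}/ipykernel_{id1}/{id2}.py",
    # so the parent directory name is the reliable marker
    parent = os.path.basename(os.path.dirname(co_filename))
    return parent.startswith("ipykernel_")

print(looks_like_ipykernel(f"{tempfile.gettempdir()}/ipykernel_12345/678.py"))  # True
```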
https://api.github.com/repos/huggingface/datasets/issues/779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/779/comments
https://api.github.com/repos/huggingface/datasets/issues/779/events
https://github.com/huggingface/datasets/pull/779
732,514,887
MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0
779
Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales
{ "avatar_url": "https://avatars.githubusercontent.com/u/11327413?v=4", "events_url": "https://api.github.com/users/rathoreanirudh/events{/privacy}", "followers_url": "https://api.github.com/users/rathoreanirudh/followers", "following_url": "https://api.github.com/users/rathoreanirudh/following{/other_user}", "gists_url": "https://api.github.com/users/rathoreanirudh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rathoreanirudh", "id": 11327413, "login": "rathoreanirudh", "node_id": "MDQ6VXNlcjExMzI3NDEz", "organizations_url": "https://api.github.com/users/rathoreanirudh/orgs", "received_events_url": "https://api.github.com/users/rathoreanirudh/received_events", "repos_url": "https://api.github.com/users/rathoreanirudh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rathoreanirudh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rathoreanirudh/subscriptions", "type": "User", "url": "https://api.github.com/users/rathoreanirudh" }
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
closed
false
null
[]
null
[ "Hi ! This looks interesting, thanks for adding it :) \r\n\r\nFor metrics there should only be two features fields: references and predictions.\r\nBoth of them can be defined as you want using nested structures if you need to.\r\nAlso I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\nAll the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for", "> Hi ! This looks interesting, thanks for adding it :)\r\n> \r\n> For metrics there should only be two features fields: references and predictions.\r\n> Both of them can be defined as you want using nested structures if you need to.\r\n> Also I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\n> All the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for\r\n\r\nThe `predictions` are the predicted labels by a model for a particular input. Do you mean making `prob_y_hat` - the probability of the prediction being the predicted label, `prob_y_hat_alpha` - the probability of the prediction being the predicted label when the input is reduced subject to alpha and the `null_difference` is the difference between the probability of the prediction being the predicted label in full information minus the probability in zero information a part of references? Also, I have added the description for other parameters in kwargs_description. I can expand it if that makes sense?", "I think every value that is generated by the model (so label, prob_y_hat, prob_y_hat_alpha etc.) should be in `predictions`.\r\nFeel free to add more details in the kwargs_description, this is very useful for the end user.", "Hi @lhoestq , I have updated the code according to your feedback. Please, let me know if it looks good and can be merged now.", "Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate" ]
"2020-10-29T17:31:14Z"
"2023-07-11T09:36:30Z"
"2023-07-11T09:36:30Z"
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/779.diff", "html_url": "https://github.com/huggingface/datasets/pull/779", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/779" }
This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/779/timeline
null
null
true
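From the parameter names discussed in the review thread, a heavily hedged sketch of how the quantities could fit together; the exact definitions and normalization in Yu et al. 2019, DeYoung et al. 2019 and Carton et al. 2020 may differ:

```python
def fidelity(prob_y_hat: float, prob_y_hat_alpha: float) -> float:
    # drop in model confidence when the input is reduced subject to alpha
    return prob_y_hat - prob_y_hat_alpha

def normalized_fidelity(prob_y_hat: float, prob_y_hat_alpha: float,
                        null_difference: float) -> float:
    # null_difference: prob under full information minus prob under zero information;
    # using it as a scale factor is an assumption, not taken from the PR diff
    if null_difference == 0:
        return 0.0
    return fidelity(prob_y_hat, prob_y_hat_alpha) / null_difference
```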
https://api.github.com/repos/huggingface/datasets/issues/2446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2446/comments
https://api.github.com/repos/huggingface/datasets/issues/2446/events
https://github.com/huggingface/datasets/issues/2446
911,635,399
MDU6SXNzdWU5MTE2MzUzOTk=
2,446
`yelp_polarity` is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JetRunner", "id": 22514219, "login": "JetRunner", "node_id": "MDQ6VXNlcjIyNTE0MjE5", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "repos_url": "https://api.github.com/users/JetRunner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "type": "User", "url": "https://api.github.com/users/JetRunner" }
[]
closed
false
null
[]
null
[ "```\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 233, in <module>\r\n configs = get_confs(option)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 604, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 588, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 148, in get_confs\r\n builder_cls = nlp.load.import_main_class(module_path[0], dataset=True)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py\", line 85, in import_main_class\r\n module = importlib.import_module(module_path)\r\nFile \"/usr/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\nFile \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\nFile \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\nFile \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\nFile \"/home/sasha/.cache/huggingface/modules/datasets_modules/datasets/yelp_polarity/a770787b2526bdcbfc29ac2d9beb8e820fbc15a03afd3ebc4fb9d8529de57544/yelp_polarity.py\", line 36, in <module>\r\n from datasets.tasks import TextClassification\r\n```", "Solved by updating the `nlpviewer`" ]
"2021-06-04T15:44:29Z"
"2021-06-04T18:56:47Z"
"2021-06-04T18:56:47Z"
CONTRIBUTOR
null
null
null
![image](https://user-images.githubusercontent.com/22514219/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2446/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2446/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4957/comments
https://api.github.com/repos/huggingface/datasets/issues/4957/events
https://github.com/huggingface/datasets/pull/4957
1,366,532,849
PR_kwDODunzps4-nGIk
4,957
Add `Dataset.from_generator`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "I restarted the builder PR job just in case", "_The documentation is not available anymore as the PR was closed or merged._", "CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed." ]
"2022-09-08T15:08:25Z"
"2022-09-16T14:46:35Z"
"2022-09-16T14:44:18Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4957.diff", "html_url": "https://github.com/huggingface/datasets/pull/4957", "merged_at": "2022-09-16T14:44:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4957.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4957" }
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism. Closes https://github.com/huggingface/datasets/issues/4417
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4957/timeline
null
null
true
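A minimal usage sketch of the method this PR adds; because rows are written to Arrow as the generator yields them, the resulting dataset can be larger than available RAM:

```python
from datasets import Dataset

def gen():
    # could yield far more rows than fit in memory
    for i in range(100_000):
        yield {"idx": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)
print(len(ds), ds[0])  # 100000 {'idx': 0, 'text': 'example 0'}
```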
https://api.github.com/repos/huggingface/datasets/issues/942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/942/comments
https://api.github.com/repos/huggingface/datasets/issues/942/events
https://github.com/huggingface/datasets/issues/942
754,162,318
MDU6SXNzdWU3NTQxNjIzMTg=
942
D
{ "avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4", "events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}", "followers_url": "https://api.github.com/users/CryptoMiKKi/followers", "following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}", "gists_url": "https://api.github.com/users/CryptoMiKKi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CryptoMiKKi", "id": 74238514, "login": "CryptoMiKKi", "node_id": "MDQ6VXNlcjc0MjM4NTE0", "organizations_url": "https://api.github.com/users/CryptoMiKKi/orgs", "received_events_url": "https://api.github.com/users/CryptoMiKKi/received_events", "repos_url": "https://api.github.com/users/CryptoMiKKi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CryptoMiKKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CryptoMiKKi/subscriptions", "type": "User", "url": "https://api.github.com/users/CryptoMiKKi" }
[]
closed
false
null
[]
null
[]
"2020-12-01T08:17:10Z"
"2020-12-03T16:42:53Z"
"2020-12-03T16:42:53Z"
NONE
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/942/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5102/comments
https://api.github.com/repos/huggingface/datasets/issues/5102/events
https://github.com/huggingface/datasets/issues/5102
1,404,746,554
I_kwDODunzps5Turs6
5,102
Error in creating a dataset from a Python generator
{ "avatar_url": "https://avatars.githubusercontent.com/u/9004682?v=4", "events_url": "https://api.github.com/users/yangxuhui/events{/privacy}", "followers_url": "https://api.github.com/users/yangxuhui/followers", "following_url": "https://api.github.com/users/yangxuhui/following{/other_user}", "gists_url": "https://api.github.com/users/yangxuhui/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yangxuhui", "id": 9004682, "login": "yangxuhui", "node_id": "MDQ6VXNlcjkwMDQ2ODI=", "organizations_url": "https://api.github.com/users/yangxuhui/orgs", "received_events_url": "https://api.github.com/users/yangxuhui/received_events", "repos_url": "https://api.github.com/users/yangxuhui/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yangxuhui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangxuhui/subscriptions", "type": "User", "url": "https://api.github.com/users/yangxuhui" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/riccardobucco", "id": 9295277, "login": "riccardobucco", "node_id": "MDQ6VXNlcjkyOTUyNzc=", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "repos_url": "https://api.github.com/users/riccardobucco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "type": "User", "url": "https://api.github.com/users/riccardobucco" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/riccardobucco", "id": 9295277, "login": "riccardobucco", "node_id": "MDQ6VXNlcjkyOTUyNzc=", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "repos_url": "https://api.github.com/users/riccardobucco/repos", "site_admin": false, "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "type": "User", "url": "https://api.github.com/users/riccardobucco" } ]
null
[ "Hi, thanks for reporting! The last line should be `dataset = Dataset.from_generator(my_gen)`.", "Can I work on this one?" ]
"2022-10-11T14:28:58Z"
"2022-10-12T11:31:56Z"
"2022-10-12T11:31:56Z"
NONE
null
null
null
## Describe the bug In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in. ```Python >>> from datasets import Dataset >>> def my_gen(): ... for i in range(1, 4): ... yield {"a": i} >>> dataset = Dataset.from_generator(my_dict) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5102/timeline
null
completed
false
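For completeness, the documentation example with the fix from the comment applied (the generator `my_gen` is passed instead of the undefined `my_dict`):

```python
from datasets import Dataset

def my_gen():
    for i in range(1, 4):
        yield {"a": i}

dataset = Dataset.from_generator(my_gen)  # fixed: was Dataset.from_generator(my_dict)
```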
https://api.github.com/repos/huggingface/datasets/issues/387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/387/comments
https://api.github.com/repos/huggingface/datasets/issues/387/events
https://github.com/huggingface/datasets/issues/387
656,361,357
MDU6SXNzdWU2NTYzNjEzNTc=
387
Conversion through to_pandas outputs numpy arrays for lists instead of python objects
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe we can have to_pydict/to_pylist as the default and use to_numpy or to_pandas when the format (set by `set_format`) is 'numpy' or 'pandas'" ]
"2020-07-14T06:24:01Z"
"2020-07-17T11:37:00Z"
"2020-07-17T11:37:00Z"
MEMBER
null
null
null
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]} >>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0]) <class 'numpy.ndarray'> >>> dataset._data.slice(key, 1).to_pydict() {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/387/timeline
null
completed
false
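The follow-up comment contrasts the conversion paths; a small sketch of the trade-off on a toy Arrow table:

```python
import pyarrow as pa

table = pa.table({"input_ids": [[101, 102], [103, 104]]})

as_pandas = table.to_pandas().to_dict("list")["input_ids"][0]
as_pydict = table.to_pydict()["input_ids"][0]

print(type(as_pandas))  # <class 'numpy.ndarray'> — the fast path
print(type(as_pydict))  # <class 'list'> — much slower, but plain Python objects
```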
https://api.github.com/repos/huggingface/datasets/issues/3742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3742/comments
https://api.github.com/repos/huggingface/datasets/issues/3742/events
https://github.com/huggingface/datasets/pull/3742
1,141,174,549
PR_kwDODunzps4y-1v5
3,742
Fix ValueError message formatting in int2str
{ "avatar_url": "https://avatars.githubusercontent.com/u/41182803?v=4", "events_url": "https://api.github.com/users/akulchik/events{/privacy}", "followers_url": "https://api.github.com/users/akulchik/followers", "following_url": "https://api.github.com/users/akulchik/following{/other_user}", "gists_url": "https://api.github.com/users/akulchik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akulchik", "id": 41182803, "login": "akulchik", "node_id": "MDQ6VXNlcjQxMTgyODAz", "organizations_url": "https://api.github.com/users/akulchik/orgs", "received_events_url": "https://api.github.com/users/akulchik/received_events", "repos_url": "https://api.github.com/users/akulchik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akulchik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akulchik/subscriptions", "type": "User", "url": "https://api.github.com/users/akulchik" }
[]
closed
false
null
[]
null
[]
"2022-02-17T10:50:08Z"
"2022-02-17T15:32:02Z"
"2022-02-17T15:32:02Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3742.diff", "html_url": "https://github.com/huggingface/datasets/pull/3742", "merged_at": "2022-02-17T15:32:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3742.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3742" }
Hi! I bumped into this particular `ValueError` during my work (because an instance of `np.int64` was passed instead of regular Python `int`), and so I had to `print(type(values))` myself. Apparently, it's just the missing `f` to make message an f-string. It ain't much for a contribution, but it's honest work. Hope it spares someone else a few seconds in the future 😃
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3742/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3742/timeline
null
null
true
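The one-character nature of the fix is easy to illustrate; the names below are made up for the example and are not the actual `int2str` source:

```python
values = "not-an-int"

# missing f prefix: the placeholder is printed literally
msg_broken = "Invalid values: {values}. Expected an int."   # -> 'Invalid values: {values}. ...'

# with the f prefix the value is interpolated as intended
msg_fixed = f"Invalid values: {values}. Expected an int."   # -> 'Invalid values: not-an-int. ...'

print(msg_broken)
print(msg_fixed)
```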
https://api.github.com/repos/huggingface/datasets/issues/2616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2616/comments
https://api.github.com/repos/huggingface/datasets/issues/2616/events
https://github.com/huggingface/datasets/pull/2616
940,799,038
MDExOlB1bGxSZXF1ZXN0Njg2ODE3NjYz
2,616
Support remote data files
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[ "@lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?", "> @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?\r\n\r\nSure ! We can get the ETag with\r\n```python\r\nheaders = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) # auth for private repos\r\netag = http_head(url, headers=headers).headers.get(\"ETag\")\r\n```\r\n\r\nSince the computation of the `config_id` is done in the `DatasetBuilder.__init__`, then this means that we need to add a new parameter `use_auth_token` in `DatasetBuilder.__init__`\r\n\r\nDoes that sound good ? We can add this in a following PR" ]
"2021-07-09T14:07:38Z"
"2021-07-09T16:13:41Z"
"2021-07-09T16:13:41Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2616.diff", "html_url": "https://github.com/huggingface/datasets/pull/2616", "merged_at": "2021-07-09T16:13:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2616.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2616" }
Add support for (streaming) remote data files: ```python data_files = f"https://huggingface.co/datasets/{repo_id}/resolve/main/{relative_file_path}" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) ``` cc: @thomwolf
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2616/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2616/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3743/comments
https://api.github.com/repos/huggingface/datasets/issues/3743/events
https://github.com/huggingface/datasets/pull/3743
1,141,176,011
PR_kwDODunzps4y-2Do
3,743
Initial Monash time series forecasting repository
{ "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kashif", "id": 8100, "login": "kashif", "node_id": "MDQ6VXNlcjgxMDA=", "organizations_url": "https://api.github.com/users/kashif/orgs", "received_events_url": "https://api.github.com/users/kashif/received_events", "repos_url": "https://api.github.com/users/kashif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "type": "User", "url": "https://api.github.com/users/kashif" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The CI fails are unrelated to this PR, merging !", "thanks 🙇🏽 " ]
"2022-02-17T10:51:31Z"
"2022-03-21T09:54:41Z"
"2022-03-21T09:50:16Z"
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3743.diff", "html_url": "https://github.com/huggingface/datasets/pull/3743", "merged_at": "2022-03-21T09:50:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/3743.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3743" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3743/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3743/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3795/comments
https://api.github.com/repos/huggingface/datasets/issues/3795/events
https://github.com/huggingface/datasets/issues/3795
1,153,261,281
I_kwDODunzps5EvV7h
3,795
Cannot flatten natural_questions dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hannibal046", "id": 38466901, "login": "Hannibal046", "node_id": "MDQ6VXNlcjM4NDY2OTAx", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "repos_url": "https://api.github.com/users/Hannibal046/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "type": "User", "url": "https://api.github.com/users/Hannibal046" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "same issue. downgrade it to a lower version.", "Thanks for reporting, I'll take a look tomorrow :)" ]
"2022-02-27T13:57:40Z"
"2022-03-21T14:36:12Z"
"2022-03-21T14:36:12Z"
NONE
null
null
null
## Describe the bug
After downloading the natural_questions dataset, the dataset cannot be flattened, given that `annotations` contains nested `long answer` and `short answer` fields.

## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('natural_questions', cache_dir='data/dataset_cache_dir')
dataset['train'].flatten()
```

## Expected results
A dataset with `long_answer` exposed as a top-level feature.

## Actual results
```
Traceback (most recent call last):
  File "temp.py", line 5, in <module>
    dataset['train'].flatten()
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper
    out = func(self, *args, **kwargs)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten
    dataset._data = update_metadata_with_features(dataset._data, dataset.features)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features
    features = Features({col_name: features[col_name] for col_name in table.column_names})
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp>
    features = Features({col_name: features[col_name] for col_name in table.column_names})
KeyError: 'annotations.long_answer'
```

## Environment info
- `datasets` version: 1.8.13
- Platform: MBP
- Python version: 3.8
- PyArrow version: 6.0.1
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3795/timeline
null
completed
false
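Editor's note: a minimal illustration of what `flatten()` is expected to do on a nested (struct) column, useful for checking whether a given `datasets` version is affected by the bug above. The toy column and field names are illustrative only; they merely mirror the shape of `annotations` at a much smaller scale.

```python
from datasets import Dataset

# Toy dataset with one nested column, analogous to `annotations`.
ds = Dataset.from_dict({"annotations": [{"long_answer": "a", "short_answer": "b"}]})

# On an unaffected version, flatten() promotes the nested fields to
# top-level dotted columns instead of raising a KeyError.
print(ds.flatten().column_names)
# ['annotations.long_answer', 'annotations.short_answer']
```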
https://api.github.com/repos/huggingface/datasets/issues/2
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2/comments
https://api.github.com/repos/huggingface/datasets/issues/2/events
https://github.com/huggingface/datasets/issues/2
599,767,671
MDU6SXNzdWU1OTk3Njc2NzE=
2
Issue to read a local dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[ "My first bug report ❤️\r\nLooking into this right now!", "Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(self, **config):\r\n self.train = config.pop(\"train\", None)\r\n self.validation = config.pop(\"validation\", None)\r\n super(Bbc, self).__init__(**config)\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation})]\r\n\r\n def _generate_examples(self, filepath):\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nAnd the dataset folder becomes:\r\n```\r\n.\r\n├── bbc\r\n│ ├── bbc.py\r\n│ └── data\r\n│ ├── test.csv\r\n│ └── train.csv\r\n```\r\nI can load the dataset by using the keywords arguments like this:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\"})\r\n```\r\n\r\nThat was the good part ^^ Because it took me some time to understand that the script itself is put in cache in `datasets/src/nlp/datasets/some-hash/bbc.py` which is very difficult to discover without checking the source code. It means that doesn't matter the changes you do to your original script it is taken into account. I think instead of doing a hash on the name (I suppose it is the name), a hash on the content of the script itself should be a better solution.\r\n\r\nThen by diving a bit in the code I found the `force_reload` parameter [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L50) but the call of this `load_dataset` method is done with the `builder_kwargs` as seen [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L166) which is ok until the call to the builder is done as the builder do not have this `force_reload` parameter. 
To show as example, the previous load becomes:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\", \"force_reload\": True})\r\n```\r\nRaises\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 283, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 170, in builder\r\n builder_instance = builder_cls(**builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/datasets/84d638d2a8ca919d1021a554e741766f50679dc6553d5a0612b6094311babd39/bbc.py\", line 12, in __init__\r\n super(Bbc, self).__init__(**config)\r\nTypeError: __init__() got an unexpected keyword argument 'force_reload'\r\n```\r\nSo yes the cache is refreshed with the new script but then raises this error.", "Ok great, so as discussed today, let's:\r\n- have a main dataset directory inside the lib with sub-directories hashed by the content of the file\r\n- keep a cache for downloading the scripts from S3 for now\r\n- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)\r\n\r\nSide question: do you often use `builder_kwargs` for other things than supplying file paths? I was thinking about having a more easy to read and remember `data_files` argument maybe.", "Good plan!\r\n\r\nYes I do use `builder_kwargs` for other things such as:\r\n- dataset name\r\n- properties to know how to properly read a CSV file: do I have to skip the first line in a CSV, which delimiter is used, and the columns ids to use.\r\n- properties to know how to properly read a JSON file: which properties in a JSON object to read", "Done!" ]
"2020-04-14T18:18:51Z"
"2020-05-11T18:55:23Z"
"2020-05-11T18:55:22Z"
CONTRIBUTOR
null
null
null
Hello,

As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset; the script I have written is the following:

```python
import os
import csv

import nlp


class BbcConfig(nlp.BuilderConfig):
    def __init__(self, **kwargs):
        super(BbcConfig, self).__init__(**kwargs)


class Bbc(nlp.GeneratorBasedBuilder):
    _DIR = "./data"
    _DEV_FILE = "test.csv"
    _TRAINING_FILE = "train.csv"

    BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]

    def _info(self):
        return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))

    def _split_generators(self, dl_manager):
        files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}

        return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
                nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]

    def _generate_examples(self, filepath):
        with open(filepath) as f:
            reader = csv.reader(f, delimiter=',', quotechar="\"")
            lines = list(reader)[1:]

            for idx, line in enumerate(lines):
                yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```

The dataset is attached to this issue as well: [data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)

Now the steps to reproduce what I would like to do:
1. unzip the data locally (I know the nlp lib can detect and extract archives, but I want to reduce and facilitate the reproduction as much as possible)
2. create the `bbc.py` script as above, at the same location as the unzipped `data` folder.

Now I try to load the dataset in three different ways, and none works. The first one uses the name of the dataset, as I would do with TFDS:

```python
import nlp
from bbc import Bbc

dataset = nlp.load("bbc")
```

I get:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
    local_files_only=local_files_only,
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
    if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
  File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
    with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```

But @thomwolf told me that there is no need to import the script, just to give its path, so I tried three different ways:

```python
import nlp
dataset = nlp.load("bbc.py")
```

And

```python
import nlp
dataset = nlp.load("./bbc.py")
```

And

```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```

These three ways give me:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
    dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
    builder_cls = load_dataset(path, name=name, **builder_kwargs)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
    dataset_module = importlib.import_module(module_path)
  File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```

Any idea of what I'm missing? Or I might have spotted a bug :)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2/timeline
null
completed
false
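Editor's note: the easier-to-remember `data_files` argument floated at the end of this thread is essentially what later shipped in the library. A sketch of the modern equivalent for the same BBC CSVs, assuming the unzipped layout from the issue and using the generic `csv` builder instead of a custom script:

```python
from datasets import load_dataset

# Load the local CSVs directly; no dataset script needed.
dataset = load_dataset(
    "csv",
    data_files={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"},
)
```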
https://api.github.com/repos/huggingface/datasets/issues/6317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6317/comments
https://api.github.com/repos/huggingface/datasets/issues/6317/events
https://github.com/huggingface/datasets/issues/6317
1,951,965,668
I_kwDODunzps50WKHk
6,317
sentiment140 dataset unavailable
{ "avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4", "events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}", "followers_url": "https://api.github.com/users/AndreasKarasenko/followers", "following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_user}", "gists_url": "https://api.github.com/users/AndreasKarasenko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AndreasKarasenko", "id": 52670382, "login": "AndreasKarasenko", "node_id": "MDQ6VXNlcjUyNjcwMzgy", "organizations_url": "https://api.github.com/users/AndreasKarasenko/orgs", "received_events_url": "https://api.github.com/users/AndreasKarasenko/received_events", "repos_url": "https://api.github.com/users/AndreasKarasenko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AndreasKarasenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreasKarasenko/subscriptions", "type": "User", "url": "https://api.github.com/users/AndreasKarasenko" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting. We are investigating the issue.", "We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there." ]
"2023-10-19T11:25:21Z"
"2023-10-19T13:04:56Z"
"2023-10-19T13:04:56Z"
NONE
null
null
null
### Describe the bug
Loading the dataset using `load_dataset("sentiment140")` returns the following error:
```
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
```

### Steps to reproduce the bug
Run the following code (the version should not matter).
```python
from datasets import load_dataset
data = load_dataset("sentiment140")
```

### Expected behavior
The dataset should be loaded just like any other. The main issue is that it is no longer hosted by Stanford. It is still available from a [Google Drive link](https://docs.google.com/file/d/0B04GJPshIjmPRnZManQwWEdTZjg/edit).

### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6317/timeline
null
completed
false
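Editor's note: a possible stopgap while the original Stanford URL is down, assuming the archive has been downloaded manually (for example from the Google Drive link in the report) and extracted. The file name, column layout, and latin-1 encoding follow the usual sentiment140 CSV conventions and should be double-checked against the actual file; the keyword names are the pandas-style options the `csv` builder forwards.

```python
from datasets import load_dataset

# Assumed local path after manually extracting trainingandtestdata.zip
train_file = "training.1600000.processed.noemoticon.csv"

ds = load_dataset(
    "csv",
    data_files={"train": train_file},
    # The raw file has no header row; these names are the conventional ones.
    column_names=["sentiment", "id", "date", "query", "user", "text"],
    encoding="latin-1",  # the raw file is not valid UTF-8
)
```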
https://api.github.com/repos/huggingface/datasets/issues/1796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1796/comments
https://api.github.com/repos/huggingface/datasets/issues/1796/events
https://github.com/huggingface/datasets/issues/1796
797,329,905
MDU6SXNzdWU3OTczMjk5MDU=
1,796
Filter on dataset too much slowww
{ "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}", "followers_url": "https://api.github.com/users/ayubSubhaniya/followers", "following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}", "gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayubSubhaniya", "id": 20911334, "login": "ayubSubhaniya", "node_id": "MDQ6VXNlcjIwOTExMzM0", "organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs", "received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events", "repos_url": "https://api.github.com/users/ayubSubhaniya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions", "type": "User", "url": "https://api.github.com/users/ayubSubhaniya" }
[]
open
false
null
[]
null
[ "When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```", "@thomwolf @lhoestq can you guys please take a look and recommend some solution.", "Hi ! Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.\r\nUsing a mask directly on the arrow table doesn't do any read or write operation therefore it's way quicker.\r\n\r\nReplacing the old table by the new one should do the job:\r\n```python\r\ndataset._data = dataset._data.filter(...)\r\n```\r\n\r\nNote: this is a **workaround** and in general users shouldn't have to do that. In particular if you did some `shuffle` or `select` before that then it would not work correctly since the indices mapping (index from `__getitem__` -> index in the table) would not be valid anymore. But if you haven't done any `shuffle`, `select`, `shard`, `train_test_split` etc. then it should work.\r\n\r\nIdeally it would be awesome to update the filter function to allow masking this way !\r\nIf you would like to give it a shot I will be happy to help :) ", "Yes, would be happy to contribute. Thanks", "Hi @lhoestq @ayubSubhaniya,\r\n\r\nIf there's no progress on this one, can I try working on it?\r\n\r\nThanks,\r\nGunjan", "Sure @gchhablani feel free to start working on it, this would be very appreciated :)\r\nThis feature is would be really awesome, especially since arrow allows to mask really quickly and without having to rewrite the dataset on disk" ]
"2021-01-30T04:09:19Z"
"2021-02-18T17:09:24Z"
null
NONE
null
null
null
I have a dataset with 50M rows. For pre-processing, I need to tokenize it and filter out rows with overly long sequences.

My tokenization took roughly 12 minutes, using `map()` with batch size 1024 and multi-processing with 96 processes. When I applied the `filter()` function it took far too much time. I need to filter sequences based on a boolean column. Below are the variants I tried:
1. `filter()` with batch size 1024, single process (takes roughly 3 hr)
2. `filter()` with batch size 1024, 96 processes (takes 5-6 hrs ¯\\\_(ツ)\_/¯)
3. `filter()` with all data loaded in memory, only a single boolean column (never ends).

Can someone please help?

Below is sample code for a small dataset (with the missing `import random` added):

```python
import random

from datasets import load_dataset

dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.map(lambda x: {'flag': random.randint(0, 1) == 1})

def _amplify(data):
    return data

dataset = dataset.filter(_amplify, batch_size=1024, keep_in_memory=False, input_columns=['flag'])
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1796/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1796/timeline
null
null
false
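Editor's note: the workaround from this thread, assembled into one runnable sketch. It masks the Arrow table directly, as reported working in the discussion, but note the caveat from the maintainer: it mutates a private attribute and is only safe if no `shuffle`, `select`, `shard`, or `train_test_split` has been applied, since it bypasses the indices mapping.

```python
import random

from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
dataset = dataset.map(lambda x: {"flag": random.randint(0, 1) == 1})

# Workaround from the thread: filter the Arrow table in place instead of
# rewriting the dataset on disk batch by batch.
dataset._data = dataset.data.filter(mask=dataset["flag"])
```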
https://api.github.com/repos/huggingface/datasets/issues/5798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5798/comments
https://api.github.com/repos/huggingface/datasets/issues/5798/events
https://github.com/huggingface/datasets/issues/5798
1,685,904,526
I_kwDODunzps5kfNyO
5,798
Support parallelized downloading and processing in load_dataset with Spark
{ "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/es94129", "id": 12763339, "login": "es94129", "node_id": "MDQ6VXNlcjEyNzYzMzM5", "organizations_url": "https://api.github.com/users/es94129/orgs", "received_events_url": "https://api.github.com/users/es94129/received_events", "repos_url": "https://api.github.com/users/es94129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "type": "User", "url": "https://api.github.com/users/es94129" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to ping us when you'd like to open PRs for this kind of things, so that we can discuss this before you start working on it ^^", "Hi, thanks for taking a look and providing your input! I don't know of such packages, and even it exists, I don't think with the process pool API it's possible to run Spark as backend properly; otherwise I understand a unified API would be preferable.\r\n\r\nThe process pool API requires splitting the workload to a fixed number parts for multiprocessing; meanwhile distributed framework such as Spark has sophisticated scheduler to distribute the workload to the processes on multiple machines in a cluster, so the way of splitting things for `multiprocessing.pool` would not suit / be as flexible as directly calling the `sparkContext.parallelize` API.\r\n\r\nI think this could be a good addition to scale the `datasets` implementation to distributed workers, and from my benchmark results so far it looks promising compared with multiprocessing.", "I see ! I think we only need an equivalent of `pool.map`. We use it to run download and conversion of data files on disk. That would require less changes in the internal code - and therefore less tests to write ;)\r\n\r\nWe also use `pool.apply_async` in some places with a `Queue` to get progress updates of the running jobs. I'm mentioning this in case there's a way to get a python generator from a running spark job ? This is less important though", "For Spark, `rdd.map` (where `rdd` can be created by `sparkContext.parallelize`) is the most similar as `pool.map`, but it requires creating a Spark RDD first that is used for distributing the `iterable` and the actual parallelization is managed by the Spark framework; `pool.map` takes the splits of `iterable` that are split into `num_proc` parts by the Python code. You can also check my PR #5807 in the `src/datasets/utils/py_utils.py` file to compare the differences of the APIs, it might make more sense than the the above description.\r\n\r\nGiven the different inputs and mechanisms of calling the `map` functions, this is why I think it's not that feasible to reuse most of the `multiprocessing` code.\r\n\r\nProgress bar updating might be challenging with Spark, I'll consider it as a followup work.", "Indeed I think the current use of multiprocessing.Pool in `map_nested` can be rewritten to work like `sparkContext.parallelize` - without splitting the iterable.\r\n\r\nMaybe from the user's perspective it's ok to let multiprocessing.Pool or spark distribute the load on their own, as long as it takes a list and runs jobs in parallel in the end :)\r\n", "From your feedback, seems to me there are two paths to consider now for supporting spark's `map` function in `map_nested` now:\r\n1. Keep the current `pool.map` implementation, and add an if statement for the spark's `map` code (which is what I did in my current PR) -- the code change is just a few lines in the `map_nested` function, and it has been tested by unit tests + manual testing on real Spark clusters; if you have other concerns I'd also be happy to address them.\r\n2. 
Rewrite the current `pool.map` implementation to remove splitting the iterable, and we will still need to add an if statement to use either\r\n```python\r\nwith Pool(...) as pool:\r\n mapped = pool.map(_single_map_nested, iterable)\r\n```\r\nor\r\n```python\r\nrdd = spark.sparkContext.parallelize(iterable)\r\nmapped = rdd.map(lambda obj: _single_map_nested((function, obj, types, None, True, None))).collect()\r\n```\r\nbecause there is no unified API that supports both `pool.map` and `rdd.map`. This can be more unified and flexible in the long run, but might require more work, and it will change the existing multiprocessing behavior, which is why I'm not leaning towards this option.\r\n\r\nAm I understanding correctly?", "Yup correct ! I think it's a nice path because it would be possible for users to define whatever parallel processing backend they want. I think we still need to discuss how that would look like in the `datasets` API : how to specify it has to use the \"spark\" parallel backend ? And how to specify the spark session parameters (number of executors etc.) ? Maybe there is something more practical than `use_spark=True`\r\n\r\nI'll check with the team internally if they have some ideas, but feel free to share your thoughts here !", "Sure, please let me know if you have more updates regarding the API and implementation from the team.\r\n\r\nFor parameters we don't need to worry about setting them for Spark, because Spark will figure out the environment / number of worker nodes by itself, so it's preferable to just provide some parameter such as `use_spark` to use the RDD `map` function.", "Hi! I wanted to check in to see if there is any update from the team.\r\n\r\nA potential change of API I can think of is change the argument to `distributed_backend=...`, which accepts `str`, such as `load_dataset(..., distributed_backend=\"spark\")`.\r\n\r\nImplementation wise, we can add a class / function to abstract away the details of using multiprocessing vs. spark vs. other parallel processing frameworks in `map_nested` and `_prepare_split`.", "I found this quite interesting: https://github.com/joblib/joblib-spark with this syntax:\r\n\r\n```python\r\nwith parallel_backend('spark', n_jobs=3):\r\n ...\r\n```\r\n\r\ncc @lu-wang-dl who might know better", "Joblib spark is providing Spark backend for joblib. We can implement a general parallel backend like\r\n```\r\nwith parallel_backend(\"<parallel-backedn>\", n_jobs=..):\r\n```\r\n\r\nIt can support multiprocessing , spark, ray, and etc. https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend", "Thank you @lhoestq for finding this repo. I validated that it can distribute downloading jobs with Spark to arbitrary cluster worker nodes evenly with `n_jobs=-1`.\r\n\r\nFor the API, I think it makes sense to define it as\r\n```python\r\nload_dataset(..., parallel_backend=<str>)\r\n```\r\nwhere `parallel_backend` can be `spark`, `multiprocessing`, and potentially other supported joblib backends including `ray` and `dask`.\r\n\r\nImplementation-wise, do you think it is better to just use `joblib` for `spark` backend in `map_nested`, or also migrate the `multiprocessing.Pool` code to use `joblib`?", "Hello @lhoestq, I wanted to follow up on my previous comment with some prototyping code that demonstrates how `map_nested` would be like if we unify `multiprocessing` and `spark` with `joblib`. 
The snippet hasn't hashed out the details such as dealing with `tqdm` yet.\r\n\r\nIn terms of API, the way of using multiprocessing is still the same; for Spark, the user sets `parallel_backend='spark'` can reuse the `num_proc` argument to pass in the number of executors, or preferably, just set `num_proc=-1` and joblib is able to decide it (I've validated it by running it on a Spark cluster).\r\n\r\n```python\r\ndef map_nested(\r\n # ... same args\r\n parallel_backend: Optional[str] = None, # proposed new argument\r\n):\r\n\r\n # ... same code\r\n\r\n # allow user to specify num_proc=-1, so that joblib will optimize it\r\n if (num_proc <= 1 and num_proc != -1) or len(iterable) < parallel_min_length:\r\n # same code\r\n mapped = [\r\n _single_map_nested((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n ]\r\n else:\r\n if not parallel_backend:\r\n parallel_backend = 'loky' # 'loky' is joblib's own implementation of robust multiprocessing\r\n \r\n n_jobs = min(num_proc, len(iterable))\r\n\r\n if parallel_backend == 'spark':\r\n n_jobs = -1 # 'loky' is joblib's own implementation of robust multiprocessing\r\n from joblibspark import register_spark\r\n register_spark()\r\n\r\n # parallelized with the same API\r\n with joblib.parallel_backend(parallel_backend, n_jobs=n_jobs):\r\n mapped = joblib.Parallel()(\r\n joblib.delayed(\r\n _single_map_nested((function, obj, types, None, True, None))\r\n )(obj) for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n )\r\n \r\n # ... same code\r\n```\r\nWe can always `joblib` for Spark and other distributed backends such as Ray if people want to support them later. It's worth noting that some distributed backends do not currently have `joblib` implementations.\r\n\r\nI would appreciate your thoughts on this proposed new API. We can also discuss the pros and cons of migrating the `multiprocessing` code to `joblib` later.", "Nice ! It should be quite easy to make the change then :)\r\n\r\nI think adding spark support can actually be less than 20 lines of code and would roughly require one line of code to change in map_nested:\r\n\r\nMaybe we can define a new `datasets.parallel` submodule that has the `parallel_backend()` context manager and a `parallel_map()` function that uses `Pool.map` by default and `joblib` otherwise.\r\n\r\n`joblib` would be an optional dependency, and `joblib-spark` as well.\r\n\r\nThen whenever someone wants to use Spark, they can do something like this (similar to scikit-learn parallel_backend):\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\"):\r\n ds = load_dataset(...)\r\n```\r\n\r\nWhat do you think ?", "Although until we've switched to all the steps in `load_dataset` to use `datasets.parallel`, I would require the user to explicitly say which step should use Spark. 
Maybe something like this, but I'm not sure yet:\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\"]):\r\n ds = load_dataset(...)\r\n```\r\nfor now some steps can be NotImplemented:\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\", \"prepare\"]):\r\n# NotImplementedError: the \"prepare\" step that converts the raw data files to Arrow is not compatible with the \"spark\" backend yet\r\n```\r\n\r\nThis way we can progressively roll out Spark support for the other data loading/processing steps without breaking changes between `datasets` versions", "Sounds good! I like the partial rollout idea.\r\nSo for example `map_nested` would call `parallel_map` under the hood if `num_proc != 1` or `parallel_backend` is specified right?\r\nI would be happy to start a PR next week to explore this path.", "Awesome ! I think map_nested can call `parallel_map()` if num_proc > 1, and `parallel_map` can be responsible to use Pool.map by default or joblib." ]
"2023-04-27T00:16:11Z"
"2023-05-25T14:11:41Z"
null
CONTRIBUTOR
null
null
null
### Feature request
When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes.

```python
load_dataset(..., use_spark=True)
```

### Motivation
Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes.

### Your contribution
I can submit a PR to support this.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5798/timeline
null
null
false
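Editor's note: for reference, a self-contained sketch of the joblib pattern the thread converges on, using `joblib-spark` directly. It assumes `pyspark` and `joblib-spark` are installed and a Spark session can be created on the machine; `download_one` and `urls` are placeholders standing in for the real per-file download step, not part of any library API.

```python
import joblib
from joblibspark import register_spark  # pip install joblib-spark

register_spark()  # make the "spark" backend available to joblib


def download_one(url):
    # Placeholder for the real download/processing step.
    return url


urls = ["file1", "file2", "file3"]

# n_jobs=-1 lets joblib/Spark decide how to spread work over executors.
with joblib.parallel_backend("spark", n_jobs=-1):
    results = joblib.Parallel()(joblib.delayed(download_one)(u) for u in urls)
```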
https://api.github.com/repos/huggingface/datasets/issues/6124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6124/comments
https://api.github.com/repos/huggingface/datasets/issues/6124/events
https://github.com/huggingface/datasets/issues/6124
1,837,868,112
I_kwDODunzps5ti6RQ
6,124
Datasets crashing runs due to KeyError
{ "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/conceptofmind", "id": 25208228, "login": "conceptofmind", "node_id": "MDQ6VXNlcjI1MjA4MjI4", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "repos_url": "https://api.github.com/users/conceptofmind/repos", "site_admin": false, "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "type": "User", "url": "https://api.github.com/users/conceptofmind" }
[]
closed
false
null
[]
null
[ "i once had the same error and I could fix that by pushing a fake or a dummy commit on my hugging face dataset repo", "Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?", "> Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?\r\n\r\nHi Mario,\r\n\r\nUnfortunately, the dataset in question is currently private until the model is trained and released.\r\n\r\nThis is not happening with one dataset but numerous hosted private datasets.\r\n\r\nI am only loading the dataset and doing nothing else currently. It seems to happen completely sporadically.\r\n\r\nThank you,\r\n\r\nEnrico", "Hi,\r\n\r\nI have the same error in the dataset viewer with my dataset\r\nhttps://huggingface.co/datasets/elsaEU/ELSA10M_track1\r\n\r\nHas anyone solved this issue?\r\n\r\nEdit: After a dummy commit the error changed in ConfigNamesError", "@rs9000 The problem seems to be the (large) number of commits, as explained in https://huggingface.co/docs/hub/repositories-recommendations. This can be fixed by running:\r\n```python\r\nimport huggingface_hub\r\nhuggingface_hub.super_squash_history(repo_id=\"elsaEU/ELSA10M_track1\")\r\n``` \r\n\r\nThe issue stems from `push_to_hub` creating one commit per shard - https://github.com/huggingface/datasets/pull/6269 should fix this issue (will create one commit per 50 uploaded shards by default). The linked PR will be included in the next `datasets` release.\r\n\r\n\r\ncc @lhoestq @severo for visibility", "Thank you @mariosasko it works.", "#6269 has been merged, so I'm closing this issue" ]
"2023-08-05T17:48:56Z"
"2023-11-30T16:28:57Z"
"2023-11-30T16:28:57Z"
NONE
null
null
null
### Describe the bug
Hi all,

I have been running into a pretty persistent issue recently when trying to load datasets.

```python
train_dataset = load_dataset(
    'llama-2-7b-tokenized',
    split = 'train'
)
```

I receive a `KeyError` which crashes the runs.

```
Traceback (most recent call last):
    main()
    train_dataset = load_dataset(
                    ^^^^^^^^^^^^^
    builder_instance = load_dataset_builder(
                       ^^^^^^^^^^^^^^^^^^^^^
    dataset_module = dataset_module_factory(
                     ^^^^^^^^^^^^^^^^^^^^^^^
    raise e1 from None
    ).get_module()
     ^^^^^^^^^^^^
    else get_data_patterns(base_path, download_config=self.download_config)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    return _get_data_files_patterns(resolver)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    data_files = pattern_resolver(pattern)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
    fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)]
                               ^^^^^^^^^^^^^^
    allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):
    listing = self.ls(path, detail=True, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    "last_modified": parse_datetime(tree_item["lastCommit"]["date"]),
                     ~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'lastCommit'
```

Any help would be greatly appreciated.

Thank you,

Enrico

### Steps to reproduce the bug
Load the dataset from the Hugging Face hub.

```python
train_dataset = load_dataset(
    'llama-2-7b-tokenized',
    split = 'train'
)
```

### Expected behavior
Loads the dataset.

### Environment info
- datasets-2.14.3
- CUDA 11.8
- Python 3.11
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6124/timeline
null
completed
false
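Editor's note: the squash fix from the thread, expanded into a small sketch that first checks how many commits the repo has accumulated. `user/my-dataset` is a placeholder for the affected repo; `list_repo_commits` assumes a reasonably recent `huggingface_hub` and a token with access to the (possibly private) dataset.

```python
import huggingface_hub

repo_id = "user/my-dataset"  # placeholder for the affected dataset repo

# Repos with very many commits (push_to_hub made one per shard) are the
# ones that trigger the KeyError above.
commits = huggingface_hub.list_repo_commits(repo_id, repo_type="dataset")
print(f"{len(commits)} commits on main")

# Fix from the thread: squash the whole history into a single commit.
huggingface_hub.super_squash_history(repo_id=repo_id, repo_type="dataset")
```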
https://api.github.com/repos/huggingface/datasets/issues/4675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4675/comments
https://api.github.com/repos/huggingface/datasets/issues/4675/events
https://github.com/huggingface/datasets/issues/4675
1,302,193,649
I_kwDODunzps5NneXx
4,675
Unable to use dataset with PyTorch dataloader
{ "avatar_url": "https://avatars.githubusercontent.com/u/25421460?v=4", "events_url": "https://api.github.com/users/BlueskyFR/events{/privacy}", "followers_url": "https://api.github.com/users/BlueskyFR/followers", "following_url": "https://api.github.com/users/BlueskyFR/following{/other_user}", "gists_url": "https://api.github.com/users/BlueskyFR/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BlueskyFR", "id": 25421460, "login": "BlueskyFR", "node_id": "MDQ6VXNlcjI1NDIxNDYw", "organizations_url": "https://api.github.com/users/BlueskyFR/orgs", "received_events_url": "https://api.github.com/users/BlueskyFR/received_events", "repos_url": "https://api.github.com/users/BlueskyFR/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BlueskyFR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlueskyFR/subscriptions", "type": "User", "url": "https://api.github.com/users/BlueskyFR" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi! `para_crawl` has a single column of type `Translation`, which stores translation dictionaries. These dictionaries can be stored in a NumPy array but not in a PyTorch tensor since PyTorch only supports numeric types. In `datasets`, the conversion to `torch` works as follows: \r\n1. convert PyArrow table to NumPy arrays \r\n2. convert NumPy arrays to Torch tensors. \r\n\r\nThe 2nd step is problematic for your case as `datasets` attempts to convert the array of dictionaries to a PyTorch tensor. One way to fix this is to use the [preprocessing logic](https://github.com/huggingface/transformers/blob/8581a798c0a48fca07b29ce2ca2ef55adcae8c7e/examples/pytorch/translation/run_translation.py#L440-L458) from the Transformers translation script. And on our side, I think we can replace a NumPy array of dicts with a dict of NumPy array if the feature type is `Translation`/`TranslationVariableLanguages` (one array for each language) to get the official PyTorch error message for strings in such case." ]
"2022-07-12T15:04:04Z"
"2022-07-14T14:17:46Z"
null
NONE
null
null
null
## Describe the bug
When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below.

## Steps to reproduce the bug
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset(
    "para_crawl",
    name="enfr",
    cache_dir="/tmp/test/",
    split="train",
    keep_in_memory=True,
)

dataloader = DataLoader(ds.with_format("torch"), num_workers=32)
print(next(iter(dataloader)))
```

Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/

Thanks in advance for your help!

## Expected results
The code should run with no error.

## Actual results
```
AttributeError: 'str' object has no attribute 'dtype'
```

## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4675/timeline
null
null
false
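Editor's note: one way to apply the preprocessing logic described in the comment above before switching formats. Splitting the `Translation` column into plain string columns sidesteps the dict-to-tensor conversion that fails here; a tokenizer step, omitted in this sketch, is still needed to get numeric tensors for a DataLoader. The `translation` column name follows the standard layout of translation datasets such as para_crawl.

```python
from datasets import load_dataset

ds = load_dataset("para_crawl", name="enfr", split="train")


def split_languages(batch):
    # Turn the list of {"en": ..., "fr": ...} dicts into two string columns.
    return {
        "en": [t["en"] for t in batch["translation"]],
        "fr": [t["fr"] for t in batch["translation"]],
    }


ds = ds.map(split_languages, batched=True, remove_columns=["translation"])
```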
https://api.github.com/repos/huggingface/datasets/issues/417
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/417/comments
https://api.github.com/repos/huggingface/datasets/issues/417/events
https://github.com/huggingface/datasets/pull/417
661,804,054
MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5
417
Fix docstrings for multiple metric instances
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
"2020-07-20T13:08:59Z"
"2020-07-22T09:51:00Z"
"2020-07-22T09:50:59Z"
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/417.diff", "html_url": "https://github.com/huggingface/datasets/pull/417", "merged_at": "2020-07-22T09:50:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/417.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/417" }
We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated). This should fix #304
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/417/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/417/timeline
null
null
true
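Editor's note: the behavior this PR targets can be sanity-checked with a snippet along these lines. It assumes the old `nlp` package (the library's name at the time) and that both metrics load without extra arguments.

```python
import nlp

bleu = nlp.load_metric("bleu")
rouge = nlp.load_metric("rouge")

# Before this fix, instantiating a second metric duplicated the patched
# docstrings; after it, each instance documents its own inputs.
print(bleu.compute.__doc__[:200])
print(rouge.compute.__doc__[:200])
```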
https://api.github.com/repos/huggingface/datasets/issues/2969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2969/comments
https://api.github.com/repos/huggingface/datasets/issues/2969/events
https://github.com/huggingface/datasets/issues/2969
1,007,217,867
I_kwDODunzps48COzL
2,969
medical-dialog error
{ "avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4", "events_url": "https://api.github.com/users/smeyerhot/events{/privacy}", "followers_url": "https://api.github.com/users/smeyerhot/followers", "following_url": "https://api.github.com/users/smeyerhot/following{/other_user}", "gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/smeyerhot", "id": 43877130, "login": "smeyerhot", "node_id": "MDQ6VXNlcjQzODc3MTMw", "organizations_url": "https://api.github.com/users/smeyerhot/orgs", "received_events_url": "https://api.github.com/users/smeyerhot/received_events", "repos_url": "https://api.github.com/users/smeyerhot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions", "type": "User", "url": "https://api.github.com/users/smeyerhot" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=\"force_redownload\", data_dir=\"./Medical-Dialogue-Dataset-English\", ignore_verifications=True)\r\n```" ]
"2021-09-25T23:08:44Z"
"2021-10-11T07:46:42Z"
"2021-10-11T07:46:42Z"
NONE
null
null
null
## Describe the bug
When I attempt to download the huggingface dataset medical_dialog, it errors out midway through.

## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```

## Expected results
No error.

## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
     72     ]
     73     if len(bad_splits) > 0:
---> 74         raise NonMatchingSplitsSizesError(str(bad_splits))
     75     logger.info("All the splits matched successfully.")
     76

NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```

## Environment info
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2969/timeline
null
completed
false