url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3548/comments | https://api.github.com/repos/huggingface/datasets/issues/3548/events | https://github.com/huggingface/datasets/issues/3548 | 1,096,409,512 | I_kwDODunzps5BWeGo | 3,548 | Specify the feature types of a dataset on the Hub without needing a dataset script | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abidlabs",
"id": 1778297,
"node_id": "MDQ6VXNlcjE3NzgyOTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abidlabs",
"html_url": "https://github.com/abidlabs",
"followers_url": "https://api.github.com/users/abidlabs/followers",
"following_url": "https://api.github.com/users/abidlabs/following{/other_user}",
"gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions",
"organizations_url": "https://api.github.com/users/abidlabs/orgs",
"repos_url": "https://api.github.com/users/abidlabs/repos",
"events_url": "https://api.github.com/users/abidlabs/events{/privacy}",
"received_events_url": "https://api.github.com/users/abidlabs/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13\r\n\r\nThis should be probably be documented, though. "
] | 2022-01-07T15:17:06 | 2022-01-20T14:48:38 | 2022-01-20T14:48:38 | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio.
**Describe the solution you'd like**
I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feature types I want.
The feature types could be read from the `dataset_infos.json` file, for example.
**Describe alternatives you've considered**
Creating a dataset script to specify the features, but that seems overly complicated for such a simple thing.
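As a hedged sketch of an in-code workaround (distinct from the `dataset_infos.json` route mentioned in the comments; the file and column names are illustrative), the loaded string column can be cast to an `Audio` feature:
```python
from datasets import load_dataset, Audio

# Load the CSV, then cast the column of audio paths to an actual Audio feature.
ds = load_dataset("csv", data_files="data.csv")  # illustrative file name
ds = ds.cast_column("audio_path", Audio(sampling_rate=16_000))  # illustrative column name
```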
cc @abidlabs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3548/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3547/comments | https://api.github.com/repos/huggingface/datasets/issues/3547/events | https://github.com/huggingface/datasets/issues/3547 | 1,096,405,515 | I_kwDODunzps5BWdIL | 3,547 | Datasets created with `push_to_hub` can't be accessed in offline mode | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it",
"Hi, I'm having the same issue. Is there any update on this?",
"We haven't had a chance to fix this yet. If someone would like to give it a try I'd be happy to give some guidance",
"@lhoestq Do you have an idea of what changes need to be made to `CachedDatasetModuleFactory`? I would be willing to take a crack at it. Currently unable to train with datasets I have `push_to_hub` on a cluster whose compute nodes are not connected to the internet.\r\n\r\nIt looks like it might be this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L994\r\n\r\nWhich wouldn't pick up the stuff saved under `\"datasets/allenai___parquet/*\"`. Additionally, the datasets saved under `\"datasets/allenai___parquet/*\"` appear to have hashes in their name, e.g. `\"datasets/allenai___parquet/my_dataset-def9ee5552a1043e\"`. This would not be detected by `CachedDatasetModuleFactory`, which currently looks for subdirectories\r\n\r\nhttps://github.com/huggingface/datasets/blob/0c1d099f87a883e52c42d3fd1f1052ad3967e647/src/datasets/load.py#L995-L999",
"`importable_directory_path` is used to find a **dataset script** that was previously downloaded and cached from the Hub\r\n\r\nHowever in your case there's no dataset script on the Hub, only parquet files. So the logic must be extended for this case.\r\n\r\nIn particular I think you can add a new logic in the case where `hashes is None` (i.e. if there's no dataset script associated to the dataset in the cache).\r\n\r\nIn this case you can check directly in the in the datasets cache for a directory named `<namespace>__parquet` and a subdirectory named `<config_id>`. The config_id must match `{self.name.replace(\"/\", \"--\")}-*`. \r\n\r\nIn your case those two directories correspond to `allenai___parquet` and then `allenai--my_dataset-def9ee5552a1043e`\r\n\r\nThen you can find the most recent version of the dataset in subdirectories (e.g. sorting using the last modified time of the `dataset_info.json` file).\r\n\r\nFinally, we will need return the module that is used to load the dataset from the cache. It is the same module than the one that would have been normally used if you had an internet connection.\r\n\r\nAt that point you can ping me, because we will need to pass all this:\r\n- `module_path = _PACKAGED_DATASETS_MODULES[\"parquet\"][0]`\r\n- `hash` it corresponds the name of the directory that contains the .arrow file, inside `<namespace>__parquet/<config_id>`\r\n- ` builder_kwargs = {\"hash\": hash, \"repo_id\": self.name, \"config_id\": config_id}`\r\nand currently `config_id` is not a valid argument for a `DatasetBuilder`\r\n\r\nI think in the future we want to change this caching logic completely, since I don't find it super easy to play with.",
"Hi! Is there a workaround for the time being?\r\nLike passing `data_dir` or something like that?\r\n\r\nI would like to use [this diffuser example](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) on my cluster whose nodes are not connected to the internet. I have downloaded the dataset online form the login node.",
"Hi ! Yes you can save your dataset locally with `my_dataset.save_to_disk(\"path/to/local\")` and reload it later with `load_from_disk(\"path/to/local\")`\r\n\r\n(removing myself from assignees since I'm currently not working on this right now)",
"Still not fixed? ......"
] | 2022-01-07T15:12:25 | 2023-05-10T13:09:57 | null | MEMBER | null | null | null | ## Describe the bug
In offline mode, one can still access previously-cached datasets. This fails, however, for datasets created with `push_to_hub`.
## Steps to reproduce the bug
in Python:
```python
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
in bash:
```bash
export HF_DATASETS_OFFLINE=1
```
in Python:
```python
import datasets
mpwiki = datasets.load_dataset("teven/matched_passages_wikidata")
```
## Expected results
`datasets` should find the previously-cached dataset.
## Actual results
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled
## Environment info
- `datasets` version: 1.16.2.dev0
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 3.0.0
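A workaround sketch based on the suggestion in the comments (the save path is illustrative): save the dataset to disk while online, then reload it with `load_from_disk` on the offline node.
```python
from datasets import load_dataset, load_from_disk

# On a node with internet access:
ds = load_dataset("teven/matched_passages_wikidata")
ds.save_to_disk("/shared/matched_passages_wikidata")  # illustrative path

# Later, on the offline node (HF_DATASETS_OFFLINE=1):
ds = load_from_disk("/shared/matched_passages_wikidata")
```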
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3547/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3546/comments | https://api.github.com/repos/huggingface/datasets/issues/3546/events | https://github.com/huggingface/datasets/pull/3546 | 1,096,367,684 | PR_kwDODunzps4wqYIV | 3,546 | Remove print statements in datasets | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failures are unrelated to the changes."
] | 2022-01-07T14:30:24 | 2022-01-07T18:09:16 | 2022-01-07T18:09:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3546",
"html_url": "https://github.com/huggingface/datasets/pull/3546",
"diff_url": "https://github.com/huggingface/datasets/pull/3546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3546.patch",
"merged_at": "2022-01-07T18:09:15"
} | This is a second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3546/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3545/comments | https://api.github.com/repos/huggingface/datasets/issues/3545/events | https://github.com/huggingface/datasets/pull/3545 | 1,096,189,889 | PR_kwDODunzps4wpziv | 3,545 | fix: ๐ pass token when retrieving the split names | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context",
"> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?",
"If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)"
] | 2022-01-07T10:29:22 | 2022-01-10T10:51:47 | 2022-01-10T10:51:46 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3545",
"html_url": "https://github.com/huggingface/datasets/pull/3545",
"diff_url": "https://github.com/huggingface/datasets/pull/3545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3545.patch",
"merged_at": "2022-01-10T10:51:46"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3545/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3544/comments | https://api.github.com/repos/huggingface/datasets/issues/3544/events | https://github.com/huggingface/datasets/issues/3544 | 1,095,784,681 | I_kwDODunzps5BUFjp | 3,544 | Ability to split a dataset in multiple files. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-01-06T23:02:25 | 2022-01-06T23:02:25 | null | CONTRIBUTOR | null | null | null | Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an Arrow file as this could cause a segfault and so on. Before 1.16, I was able to overwrite the dataset, and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append to `Dataset._data_files`, the workers would get the new columns when they reload the Dataset.
**Describe alternatives you've considered**
I currently need to
1. Save multiple "versions" of the dataset and load the latest.
2. Try working with cache files to get the latest columns.
**Additional context**
I think this would be a great addition to HFDataset as Parquet supports multi-file input out of the box!
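For illustration, a sketch of the multi-file Parquet loading this refers to (the shard names are illustrative):
```python
from datasets import load_dataset

# `load_dataset` already accepts a list of Parquet shards per split.
ds = load_dataset(
    "parquet",
    data_files={"train": ["shard-0.parquet", "shard-1.parquet"]},
)
```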
I can make a PR myself with some pointers as needed :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3544/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3543/comments | https://api.github.com/repos/huggingface/datasets/issues/3543/events | https://github.com/huggingface/datasets/issues/3543 | 1,095,226,438 | I_kwDODunzps5BR9RG | 3,543 | Allow loading community metrics from the hub, just like datasets | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub/how-to-downstream.md#cached_download))",
"This is a great solution in the meantime, thanks!",
"Here's the code I used, in case it can be of help to someone else:\r\n```python\r\nimport os, shutil\r\nfrom huggingface_hub import hf_hub_download\r\ndef download_metric(repo_id, file_path):\r\n # repo_id: for models \"username/model_name\", for datasets \"datasets/username/model_name\"\r\n local_metric_path = hf_hub_download(repo_id=repo_id, filename=file_path)\r\n updated_local_metric_path = (os.path.dirname(local_metric_path) + os.path.basename(local_metric_path).replace(\".\", \"_\") + \".py\")\r\n shutil.copy(local_metric_path, updated_local_metric_path)\r\n return updated_local_metric_path\r\n\r\nmetric = load_metric(download_metric(REPO_ID, FILE_PATH))\r\n```",
"Solved with https://github.com/huggingface/evaluate ๐ค ",
"Yay!! cc @lvwerra @sashavor @douwekiela \r\n\r\nPlease share your feedback @eladsegal =)"
] | 2022-01-06T11:26:26 | 2022-05-31T20:59:14 | 2022-05-31T20:53:37 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`.
However, there is no option to do it with a metric uploaded to the Hub.
This means that if I want to allow other users to use it, they must download it first, which makes the usage less smooth.
**Describe the solution you'd like**
Load metrics from the hub just like datasets are loaded.
In order not to break anything, the convention could be to put the metric file in a "metrics" folder on the Hub.
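For context, a sketch of how this eventually landed in the `evaluate` library mentioned in the closing comments (the repo id is illustrative):
```python
import evaluate

# Community metrics now load from the Hub just like datasets do.
metric = evaluate.load("username/my_metric")  # illustrative repo id
```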
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3543/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3543/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3542/comments | https://api.github.com/repos/huggingface/datasets/issues/3542/events | https://github.com/huggingface/datasets/pull/3542 | 1,095,088,485 | PR_kwDODunzps4wmPIP | 3,542 | Update the CC-100 dataset card | {
"login": "aajanki",
"id": 353043,
"node_id": "MDQ6VXNlcjM1MzA0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aajanki",
"html_url": "https://github.com/aajanki",
"followers_url": "https://api.github.com/users/aajanki/followers",
"following_url": "https://api.github.com/users/aajanki/following{/other_user}",
"gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aajanki/subscriptions",
"organizations_url": "https://api.github.com/users/aajanki/orgs",
"repos_url": "https://api.github.com/users/aajanki/repos",
"events_url": "https://api.github.com/users/aajanki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aajanki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-06T08:35:18 | 2022-01-06T18:37:44 | 2022-01-06T18:37:44 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3542",
"html_url": "https://github.com/huggingface/datasets/pull/3542",
"diff_url": "https://github.com/huggingface/datasets/pull/3542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3542.patch",
"merged_at": "2022-01-06T18:37:44"
} | * summary from the dataset homepage
* more details about the data structure
* this dataset does not contain annotations | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3542/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3541/comments | https://api.github.com/repos/huggingface/datasets/issues/3541/events | https://github.com/huggingface/datasets/issues/3541 | 1,095,033,828 | I_kwDODunzps5BROPk | 3,541 | Support 7-zip compressed data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This should also resolve: https://github.com/huggingface/datasets/issues/3185."
] | 2022-01-06T07:11:03 | 2022-07-19T10:18:30 | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
We should support 7-zip compressed data files:
- [x] in `extract`:
- #4672
- [ ] in `iter_archive`: for streaming mode
both in streaming and non-streaming modes.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3541/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3540/comments | https://api.github.com/repos/huggingface/datasets/issues/3540/events | https://github.com/huggingface/datasets/issues/3540 | 1,094,900,336 | I_kwDODunzps5BQtpw | 3,540 | How to convert torch.utils.data.Dataset to datasets.arrow_dataset.Dataset? | {
"login": "CindyTing",
"id": 35062414,
"node_id": "MDQ6VXNlcjM1MDYyNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/35062414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CindyTing",
"html_url": "https://github.com/CindyTing",
"followers_url": "https://api.github.com/users/CindyTing/followers",
"following_url": "https://api.github.com/users/CindyTing/following{/other_user}",
"gists_url": "https://api.github.com/users/CindyTing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CindyTing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CindyTing/subscriptions",
"organizations_url": "https://api.github.com/users/CindyTing/orgs",
"repos_url": "https://api.github.com/users/CindyTing/repos",
"events_url": "https://api.github.com/users/CindyTing/events{/privacy}",
"received_events_url": "https://api.github.com/users/CindyTing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2022-01-06T02:13:42 | 2022-01-06T02:17:39 | null | NONE | null | null | null | Hi,
I use `torch.utils.data.Dataset` to define my own data, but I need to use the `map` function of `datasets.arrow_dataset.Dataset` later, so I would like to convert a `torch.utils.data.Dataset` into a `datasets.arrow_dataset.Dataset`.
Here is an example.
```python
from torch.utils.data import Dataset
from datasets.arrow_dataset import Dataset as HFDataset
from transformers import AutoTokenizer  # needed for the type hint below

class ADataset(Dataset):
    def __init__(self, data):
        super().__init__()
        self.data = data

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)

class MDataset():
    def __init__(self, tokenizer: AutoTokenizer, data_args, training_args):
        self.train_dataset = ADataset(data_args)
        self.tokenizer = tokenizer
        self.data_args = data_args
        # This is where it fails: a torch Dataset has no `map` method.
        self.train_dataset = self.train_dataset.map(
            self.process_function,
            batched=True,
            remove_columns=column_names,  # `column_names` is defined elsewhere
            load_from_cache_file=True,
            desc="Running tokenizer on train dataset",
        )

    def process_function(self, examples):
        sentences = [" ".join(sample[0][3]) for sample in examples]
        tokenized = self.tokenizer(
            sentences,
            max_length=self.max_seq_len,
            padding=self.padding,
            truncation=True)
```
But it raises an error: `AttributeError: 'ADataset' object has no attribute 'map'`.
So how can I convert a `torch.utils.data.Dataset` to a `datasets.arrow_dataset.Dataset`?
Thanks in advance!
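For reference, a minimal conversion sketch (not from the original report; it assumes the torch dataset yields dictionaries, and the column name and data are illustrative):
```python
from datasets import Dataset as HFDataset

# Materialize the torch-style dataset into columns, then build an
# Arrow-backed Dataset from them; the result supports `.map`.
torch_ds = ADataset([{"text": "hello"}, {"text": "world"}])  # illustrative data
columns = {"text": [torch_ds[i]["text"] for i in range(len(torch_ds))]}
hf_ds = HFDataset.from_dict(columns)
tokenized = hf_ds.map(lambda ex: {"n_chars": len(ex["text"])})
```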
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3540/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3539/comments | https://api.github.com/repos/huggingface/datasets/issues/3539/events | https://github.com/huggingface/datasets/pull/3539 | 1,094,813,242 | PR_kwDODunzps4wlXU4 | 3,539 | Research wording for nc licenses | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The CI failure is about some missing tags or sections in the dataset cards, and is unrelated to the part about non commercial use of this PR. Merging"
] | 2022-01-05T23:01:38 | 2022-01-06T18:58:20 | 2022-01-06T18:58:19 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3539",
"html_url": "https://github.com/huggingface/datasets/pull/3539",
"diff_url": "https://github.com/huggingface/datasets/pull/3539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3539.patch",
"merged_at": "2022-01-06T18:58:19"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3539/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3538/comments | https://api.github.com/repos/huggingface/datasets/issues/3538/events | https://github.com/huggingface/datasets/pull/3538 | 1,094,756,755 | PR_kwDODunzps4wlLmD | 3,538 | Readme usage update | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-05T21:26:28 | 2022-01-05T23:34:25 | 2022-01-05T23:24:15 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3538",
"html_url": "https://github.com/huggingface/datasets/pull/3538",
"diff_url": "https://github.com/huggingface/datasets/pull/3538.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3538.patch",
"merged_at": "2022-01-05T23:24:15"
} | Noticing that the recent commit throws a lot of errors in the automatic checks. It looks to me that those errors are simply errors that were already there (metadata issues), unrelated to what I've just changed, but worth another look to make sure. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3538/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3537/comments | https://api.github.com/repos/huggingface/datasets/issues/3537/events | https://github.com/huggingface/datasets/pull/3537 | 1,094,738,734 | PR_kwDODunzps4wlH1d | 3,537 | added PII statements and license links to data cards | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-05T20:59:21 | 2022-01-05T22:02:37 | 2022-01-05T22:02:37 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3537",
"html_url": "https://github.com/huggingface/datasets/pull/3537",
"diff_url": "https://github.com/huggingface/datasets/pull/3537.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3537.patch",
"merged_at": "2022-01-05T22:02:37"
} | Updates for the following data cards:
* multilingual_librispeech
* openslr
* speech_commands
* superb
* timit_asr
* vctk
"url": "https://api.github.com/repos/huggingface/datasets/issues/3537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3537/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3536/comments | https://api.github.com/repos/huggingface/datasets/issues/3536/events | https://github.com/huggingface/datasets/pull/3536 | 1,094,645,771 | PR_kwDODunzps4wk0Yb | 3,536 | update `pretty_name` for all datasets | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Pushed the lastest changes!"
] | 2022-01-05T18:45:05 | 2022-07-10T14:36:54 | 2022-01-12T22:59:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3536",
"html_url": "https://github.com/huggingface/datasets/pull/3536",
"diff_url": "https://github.com/huggingface/datasets/pull/3536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3536.patch",
"merged_at": "2022-01-12T22:59:45"
} | This PR updates `pretty_name` for all datasets. Previous PR #3498 had done this for only first 200 datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3536/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3535/comments | https://api.github.com/repos/huggingface/datasets/issues/3535/events | https://github.com/huggingface/datasets/pull/3535 | 1,094,633,214 | PR_kwDODunzps4wkxv0 | 3,535 | Add SVHN dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-05T18:29:09 | 2022-01-12T14:14:35 | 2022-01-12T14:14:35 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3535",
"html_url": "https://github.com/huggingface/datasets/pull/3535",
"diff_url": "https://github.com/huggingface/datasets/pull/3535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3535.patch",
"merged_at": "2022-01-12T14:14:35"
} | Add the SVHN dataset.
Additional notes:
* compared to the TFDS implementation, additionally exposes the "full numbers" config
* adds streaming support for `os.path.splitext` and `scipy.io.loadmat`
* adds `h5py` to the requirements list for the dummy data test | {
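A usage sketch for the added dataset (the config id follows the PR notes but is my assumption):
```python
from datasets import load_dataset

# "full_numbers" is the extra config this PR exposes compared to TFDS.
svhn = load_dataset("svhn", "full_numbers", split="train")
print(svhn[0])
```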
"url": "https://api.github.com/repos/huggingface/datasets/issues/3535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3535/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3534/comments | https://api.github.com/repos/huggingface/datasets/issues/3534/events | https://github.com/huggingface/datasets/pull/3534 | 1,094,352,449 | PR_kwDODunzps4wj3LE | 3,534 | Update wiki_dpr README.md | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-05T13:29:44 | 2022-02-17T13:45:56 | 2022-01-05T14:16:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3534",
"html_url": "https://github.com/huggingface/datasets/pull/3534",
"diff_url": "https://github.com/huggingface/datasets/pull/3534.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3534.patch",
"merged_at": "2022-01-05T14:16:51"
} | Some info about wiki_dpr was missing, as noted in https://github.com/huggingface/datasets/issues/3510, so I added it and updated the tags and the examples
Close #3510. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3534/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3533/comments | https://api.github.com/repos/huggingface/datasets/issues/3533/events | https://github.com/huggingface/datasets/issues/3533 | 1,094,156,147 | I_kwDODunzps5BN39z | 3,533 | Task search function on hub not working correctly | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"known issue due to https://github.com/huggingface/datasets/pull/2362 (and [internal](https://github.com/huggingface/moon-landing/issues/946)) , will be solved soon",
"hmm actually i have no recollection of why I said that",
"Because it has dots in some YAML keys, it can't be parsed and indexed by the back-end"
] | 2022-01-05T09:36:30 | 2022-05-12T14:45:57 | null | MEMBER | null | null | null | When I want to look at all datasets in the `speech-processing` category, i.e. https://huggingface.co/datasets?task_categories=task_categories:speech-processing&sort=downloads , the following dataset doesn't show up for some reason:
- https://huggingface.co/datasets/speech_commands
even though its task tags seem correct:
https://raw.githubusercontent.com/huggingface/datasets/master/datasets/speech_commands/README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3533/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3532/comments | https://api.github.com/repos/huggingface/datasets/issues/3532/events | https://github.com/huggingface/datasets/pull/3532 | 1,094,035,066 | PR_kwDODunzps4wi1ft | 3,532 | Give clearer instructions to add the YAML tags | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"this is great, maybe just put all of it in one line?\r\n\r\n> TODO: Add YAML tags here. Copy-paste the tags obtained with the online tagging app: https://huggingface.co/spaces/huggingface/datasets-tagging"
] | 2022-01-05T06:47:52 | 2022-01-17T15:54:37 | 2022-01-17T15:54:36 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3532",
"html_url": "https://github.com/huggingface/datasets/pull/3532",
"diff_url": "https://github.com/huggingface/datasets/pull/3532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3532.patch",
"merged_at": "2022-01-17T15:54:36"
} | Fix #3531.
CC: @julien-c @VictorSanh | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3532/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3532/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3531/comments | https://api.github.com/repos/huggingface/datasets/issues/3531/events | https://github.com/huggingface/datasets/issues/3531 | 1,094,033,280 | I_kwDODunzps5BNZ-A | 3,531 | Give clearer instructions to add the YAML tags | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-05T06:44:20 | 2022-01-17T15:54:36 | 2022-01-17T15:54:36 | MEMBER | null | null | null | ## Describe the bug
As reported by @julien-c, many community datasets contain the line `YAML tags:` at the top of the YAML section in the header of the README file. See e.g.: https://huggingface.co/datasets/bigscience/P3/commit/a03bea08cf4d58f268b469593069af6aeb15de32
Maybe we should give clearer instructions/hints in the README template.
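For illustration, a header with the stray placeholder line looks roughly like this (the tag names below are made up, not taken from the linked commit):

```yaml
---
YAML tags:            # stray placeholder line that should have been removed
annotations_creators:
- found
language:
- en
---
```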
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3531/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3530/comments | https://api.github.com/repos/huggingface/datasets/issues/3530/events | https://github.com/huggingface/datasets/pull/3530 | 1,093,894,732 | PR_kwDODunzps4wiZCw | 3,530 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-05T01:32:07 | 2022-01-05T12:50:51 | 2022-01-05T12:50:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3530",
"html_url": "https://github.com/huggingface/datasets/pull/3530",
"diff_url": "https://github.com/huggingface/datasets/pull/3530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3530.patch",
"merged_at": "2022-01-05T12:50:50"
} | Removing the reference to "Common Voice" in the Personal and Sensitive Information section.
Adding a link to the license.
Correcting the license type in metadata. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3530/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3529/comments | https://api.github.com/repos/huggingface/datasets/issues/3529/events | https://github.com/huggingface/datasets/pull/3529 | 1,093,846,356 | PR_kwDODunzps4wiPA9 | 3,529 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-04T23:52:47 | 2022-01-05T12:50:15 | 2022-01-05T12:50:14 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3529",
"html_url": "https://github.com/huggingface/datasets/pull/3529",
"diff_url": "https://github.com/huggingface/datasets/pull/3529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3529.patch",
"merged_at": "2022-01-05T12:50:14"
} | Updating licensing information & personal and sensitive information. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3529/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3528/comments | https://api.github.com/repos/huggingface/datasets/issues/3528/events | https://github.com/huggingface/datasets/pull/3528 | 1,093,844,616 | PR_kwDODunzps4wiOqH | 3,528 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-04T23:48:11 | 2022-01-05T12:49:41 | 2022-01-05T12:49:40 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3528",
"html_url": "https://github.com/huggingface/datasets/pull/3528",
"diff_url": "https://github.com/huggingface/datasets/pull/3528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3528.patch",
"merged_at": "2022-01-05T12:49:40"
} | Updating the license with appropriate capitalization & a link.
Updating the Personal and Sensitive Information section to address PII concerns. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3528/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3527/comments | https://api.github.com/repos/huggingface/datasets/issues/3527/events | https://github.com/huggingface/datasets/pull/3527 | 1,093,840,707 | PR_kwDODunzps4wiN1w | 3,527 | Update README.md | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-04T23:39:41 | 2022-01-05T00:23:50 | 2022-01-05T00:23:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3527",
"html_url": "https://github.com/huggingface/datasets/pull/3527",
"diff_url": "https://github.com/huggingface/datasets/pull/3527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3527.patch",
"merged_at": "2022-01-05T00:23:50"
} | Adding licensing information. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3527/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3526/comments | https://api.github.com/repos/huggingface/datasets/issues/3526/events | https://github.com/huggingface/datasets/pull/3526 | 1,093,833,446 | PR_kwDODunzps4wiMaQ | 3,526 | Update license to bookcorpus dataset card | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"The smashwords ToS apply for this dataset, we did the same for https://github.com/huggingface/datasets/pull/3525",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-01-04T23:25:23 | 2022-09-30T10:23:38 | 2022-09-30T10:21:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3526",
"html_url": "https://github.com/huggingface/datasets/pull/3526",
"diff_url": "https://github.com/huggingface/datasets/pull/3526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3526.patch",
"merged_at": "2022-09-30T10:21:20"
} | Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3526/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3525/comments | https://api.github.com/repos/huggingface/datasets/issues/3525/events | https://github.com/huggingface/datasets/pull/3525 | 1,093,831,268 | PR_kwDODunzps4wiL8p | 3,525 | Adding license information for Openbookcorpus | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well",
"May I merge this one ?",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-01-04T23:20:36 | 2022-04-20T09:54:30 | 2022-04-20T09:48:10 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3525",
"html_url": "https://github.com/huggingface/datasets/pull/3525",
"diff_url": "https://github.com/huggingface/datasets/pull/3525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3525.patch",
"merged_at": "2022-04-20T09:48:10"
} | Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3525/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3524/comments | https://api.github.com/repos/huggingface/datasets/issues/3524/events | https://github.com/huggingface/datasets/pull/3524 | 1,093,826,723 | PR_kwDODunzps4wiK_v | 3,524 | Adding link to license. | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2022-01-04T23:11:48 | 2022-01-05T12:31:38 | 2022-01-05T12:31:37 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3524",
"html_url": "https://github.com/huggingface/datasets/pull/3524",
"diff_url": "https://github.com/huggingface/datasets/pull/3524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3524.patch",
"merged_at": "2022-01-05T12:31:37"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3524/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3523/comments | https://api.github.com/repos/huggingface/datasets/issues/3523/events | https://github.com/huggingface/datasets/pull/3523 | 1,093,819,227 | PR_kwDODunzps4wiJc2 | 3,523 | Added links to licensing and PII message in vctk dataset | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-04T22:56:58 | 2022-01-06T19:33:50 | 2022-01-06T19:33:50 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3523",
"html_url": "https://github.com/huggingface/datasets/pull/3523",
"diff_url": "https://github.com/huggingface/datasets/pull/3523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3523.patch",
"merged_at": "2022-01-06T19:33:50"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3523/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3522/comments | https://api.github.com/repos/huggingface/datasets/issues/3522/events | https://github.com/huggingface/datasets/issues/3522 | 1,093,807,586 | I_kwDODunzps5BMi3i | 3,522 | wmt19 is broken (zh-en) | {
"login": "AjayP13",
"id": 5404177,
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjayP13",
"html_url": "https://github.com/AjayP13",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"This issue is not reproducible."
] | 2022-01-04T22:33:45 | 2022-05-06T16:27:37 | 2022-05-06T16:27:37 | NONE | null | null | null | ## Describe the bug
Loading the `wmt19` dataset with the `zh-en` config fails: one of the source files can't be reached, so the download never completes.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wmt19", 'zh-en')
```
## Expected results
The dataset should download.
## Actual results
`ConnectionError: Couldn't reach ftp://cwmt-wmt:[email protected]/parallel/casia2015.zip`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux
- Python version: 3.8
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3522/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3521/comments | https://api.github.com/repos/huggingface/datasets/issues/3521/events | https://github.com/huggingface/datasets/pull/3521 | 1,093,797,947 | PR_kwDODunzps4wiFCs | 3,521 | Vivos license update | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-04T22:17:47 | 2022-01-04T22:18:16 | 2022-01-04T22:18:16 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3521",
"html_url": "https://github.com/huggingface/datasets/pull/3521",
"diff_url": "https://github.com/huggingface/datasets/pull/3521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3521.patch",
"merged_at": null
} | Updated the license information with the link to the license text | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3521/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3520/comments | https://api.github.com/repos/huggingface/datasets/issues/3520/events | https://github.com/huggingface/datasets/pull/3520 | 1,093,747,753 | PR_kwDODunzps4wh6oD | 3,520 | Audio datacard update - first pass | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "meg-huggingface",
"id": 90473723,
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meg-huggingface",
"html_url": "https://github.com/meg-huggingface",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'm not sure that we want to change the tags at the top of the cards by hand. Those are used to create the tags in the hub. Although looking at all the tags now, we might want to normalize the current tags again (hyphens or no, \".0\" or no). Maybe we could add a binary tag for public domain or not?",
"> \r\n\r\nThat's a good point, I didn't realize these were auto-populated.\r\nAt the same time, some of them are wrong -- how/where are they auto-populated? Seems like we should fix it at that source for the future.\r\nIn the mean time, I see that \"cc0-1.0\" is the desired tag for public domain, so I will change that for now."
] | 2022-01-04T20:58:25 | 2022-01-05T12:30:21 | 2022-01-05T12:30:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3520",
"html_url": "https://github.com/huggingface/datasets/pull/3520",
"diff_url": "https://github.com/huggingface/datasets/pull/3520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3520.patch",
"merged_at": "2022-01-05T12:30:20"
} | Filling out the "Personal and Sensitive Information" data card section for speech datasets to note PII concerns | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3520/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3519/comments | https://api.github.com/repos/huggingface/datasets/issues/3519/events | https://github.com/huggingface/datasets/pull/3519 | 1,093,655,205 | PR_kwDODunzps4whnXH | 3,519 | CC100: Using HTTPS for the data source URL fixes load_dataset() | {
"login": "aajanki",
"id": 353043,
"node_id": "MDQ6VXNlcjM1MzA0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/353043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aajanki",
"html_url": "https://github.com/aajanki",
"followers_url": "https://api.github.com/users/aajanki/followers",
"following_url": "https://api.github.com/users/aajanki/following{/other_user}",
"gists_url": "https://api.github.com/users/aajanki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aajanki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aajanki/subscriptions",
"organizations_url": "https://api.github.com/users/aajanki/orgs",
"repos_url": "https://api.github.com/users/aajanki/repos",
"events_url": "https://api.github.com/users/aajanki/events{/privacy}",
"received_events_url": "https://api.github.com/users/aajanki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-04T18:45:54 | 2022-01-05T17:28:34 | 2022-01-05T17:28:34 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3519",
"html_url": "https://github.com/huggingface/datasets/pull/3519",
"diff_url": "https://github.com/huggingface/datasets/pull/3519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3519.patch",
"merged_at": "2022-01-05T17:28:34"
} | Without this change, the following script (with any `lang` parameter) consistently fails. After changing to the HTTPS URL, the script works as expected.
```python
from datasets import load_dataset
dataset = load_dataset("cc100", lang="en")
```
This is the error produced by the previous script:
```sh
Using custom data configuration en-lang=en
Downloading and preparing dataset cc100/en to /home/antti/.cache/huggingface/datasets/cc100/en-lang=en/0.0.0/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b...
Traceback (most recent call last):
File "/home/antti/tmp/cc100/cc100.py", line 3, in <module>
dataset = load_dataset("cc100", lang="en")
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/antti/.cache/huggingface/modules/datasets_modules/datasets/cc100/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b/cc100.py", line 117, in _split_generators
path = dl_manager.download_and_extract(download_url)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 308, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 251, in map_nested
return function(data_struct)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach http://data.statmt.org/cc-100/en.txt.xz (error 503)
```
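For reference, the entire fix boils down to switching the scheme of the download URL in the dataset script. A minimal sketch (the exact constant name in `cc100.py` is an assumption):
```python
# Sketch of the change in cc100.py: request the TLS endpoint directly, so the
# server sees a single request instead of an HTTP hit plus its HTTPS redirect.
_BASE_URL = "https://data.statmt.org/cc-100/"  # previously "http://data.statmt.org/cc-100/"
download_url = _BASE_URL + "en.txt.xz"  # e.g. for lang="en"
```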
Note that I get the same behavior using curl on the command line. Plain HTTP "curl -L http://data.statmt.org/cc-100/en.txt.xz" fails with "503 Service Unavailable", but with the HTTPS version of the URL, curl starts downloading the file.
My guess is that the server applies overly aggressive rate limiting. When a client requests an HTTP URL, it (sensibly) gets redirected to the HTTPS equivalent, but the server then sees two requests coming from the same client (the original HTTP one and the redirected HTTPS one) within a brief time window, so the rate limiter kicks in and blocks the second request. If the client initially uses the HTTPS URL, there is only one incoming request, which the rate limiter allows. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3519/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3518/comments | https://api.github.com/repos/huggingface/datasets/issues/3518/events | https://github.com/huggingface/datasets/issues/3518 | 1,093,063,455 | I_kwDODunzps5BJtMf | 3,518 | Add PubMed Central Open Access dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"In the framework of BigScience:\r\n- bigscience-workshop/data_tooling#121\r\n\r\nI have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access\r\n\r\nHowever, I was wondering that it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc`\r\nThis way, we could add other datasets I'm also working on: Author Manuscript Dataset, Historical OCR Dataset, LitArch Open Access Subset.\r\n\r\nWhat do you think? @lhoestq @mariosasko ",
"Why not ! Having them under such namespaces would also help people searching for this kind of datasets.\r\nWe can also invite people from pubmed at one point",
"DONE: https://huggingface.co/datasets/pmc/open_access"
] | 2022-01-04T06:54:35 | 2022-01-17T15:25:57 | 2022-01-17T15:25:57 | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** PubMed Central Open Access
- **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3518/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3517/comments | https://api.github.com/repos/huggingface/datasets/issues/3517/events | https://github.com/huggingface/datasets/pull/3517 | 1,092,726,651 | PR_kwDODunzps4wemwU | 3,517 | Add CPPE-5 dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks so much, @mariosasko and @lhoestq , much appreciated!"
] | 2022-01-03T18:31:20 | 2022-01-19T02:23:37 | 2022-01-05T18:53:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3517",
"html_url": "https://github.com/huggingface/datasets/pull/3517",
"diff_url": "https://github.com/huggingface/datasets/pull/3517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3517.patch",
"merged_at": "2022-01-05T18:53:02"
} | Adds the recently released CPPE-5 dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3517/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3516/comments | https://api.github.com/repos/huggingface/datasets/issues/3516/events | https://github.com/huggingface/datasets/pull/3516 | 1,092,657,738 | PR_kwDODunzps4weYhE | 3,516 | dataset `asset` - change to raw.githubusercontent.com URLs | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-03T16:43:57 | 2022-01-03T17:39:02 | 2022-01-03T17:39:01 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3516",
"html_url": "https://github.com/huggingface/datasets/pull/3516",
"diff_url": "https://github.com/huggingface/datasets/pull/3516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3516.patch",
"merged_at": "2022-01-03T17:39:01"
} | Changed the URLs to the ones they were automatically redirecting to.
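For reference, the redirect target has the raw.githubusercontent.com form. A sketch (the repository path and branch below are assumptions):
```python
# Old form (a github.com link that only redirects):
#   https://github.com/facebookresearch/asset/raw/main/dataset/asset.valid.orig
# Direct form of the redirect target, as now used in the script:
_URL = "https://raw.githubusercontent.com/facebookresearch/asset/main/dataset/asset.valid.orig"
```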
Before this change, the download was failing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3516/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3515/comments | https://api.github.com/repos/huggingface/datasets/issues/3515/events | https://github.com/huggingface/datasets/issues/3515 | 1,092,624,695 | I_kwDODunzps5BICE3 | 3,515 | `ExpectedMoreDownloadedFiles` for `evidence_infer_treatment` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @VictorSanh.\r\n\r\nI'm looking at it... "
] | 2022-01-03T15:58:38 | 2022-02-14T13:21:43 | 2022-02-14T13:21:43 | MEMBER | null | null | null | ## Describe the bug
I am trying to load a dataset called `evidence_infer_treatment`. The first subset (`1.1`) works fine, but the second one (`2.0`) returns an error. It downloads a file but then crashes during checksum verification.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("evidence_infer_treatment", "2.0")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 664, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 33, in verify_checksums
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'http://evidence-inference.ebm-nlp.com/v2.0.tar.gz'}
```
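For context, this error comes from `datasets`' integrity check, which compares the recorded downloads against the checksums stored in `dataset_infos.json`. A simplified sketch of the logic behind the `raise` in the traceback above (not the exact library code):
```python
class ExpectedMoreDownloadedFiles(Exception):
    """dataset_infos.json lists files that were never downloaded."""

def verify_checksums(expected_checksums: dict, recorded_checksums: dict) -> None:
    # Any URL present in the expected set but absent from the recorded
    # downloads aborts the build before sizes/hashes are even compared.
    missing = set(expected_checksums) - set(recorded_checksums)
    if missing:
        raise ExpectedMoreDownloadedFiles(str(missing))
```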
I did try to pass the argument `ignore_verifications=True`, but ran into an error when trying to build the dataset:
```python
>>> load_dataset("evidence_infer_treatment", "2.0", ignore_verifications=True, download_mode="force_redownload")
Downloading and preparing dataset evidence_infer_treatment/2.0 (download: 34.84 MiB, generated: 91.46 MiB, post-processed: Unknown size, total: 126.30 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/evidence_infer_treatment/2.0/2.0.0/6812655bfd26cbaa58c84eab098bf6403694b06c6ae2ded603c55681868a1e24...
Downloading: 164MB [00:23, 6.98MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py", line 1669, in load_dataset
use_auth_token=use_auth_token,
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 594, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 681, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py", line 1080, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 1032, in encode_example
return encode_nested_example(self, example)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 807, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in encode_nested_example
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 829, in <listcomp>
list_dict[k] = [encode_nested_example(dict_tuples[0], o) for o in dict_tuples[1:]]
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/features/features.py", line 828, in encode_nested_example
for k, dict_tuples in utils.zip_dict(schema.feature, *obj):
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 136, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: ''
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3515/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3514/comments | https://api.github.com/repos/huggingface/datasets/issues/3514/events | https://github.com/huggingface/datasets/pull/3514 | 1,092,606,383 | PR_kwDODunzps4weN9W | 3,514 | Fix to_tf_dataset references in docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call."
] | 2022-01-03T15:31:39 | 2022-01-05T18:52:48 | 2022-01-05T18:52:48 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3514",
"html_url": "https://github.com/huggingface/datasets/pull/3514",
"diff_url": "https://github.com/huggingface/datasets/pull/3514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3514.patch",
"merged_at": "2022-01-05T18:52:47"
} | Fix the `to_tf_dataset` references in the docs. The currently failing example of usage will be fixed by #3338. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3514/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3513/comments | https://api.github.com/repos/huggingface/datasets/issues/3513/events | https://github.com/huggingface/datasets/pull/3513 | 1,092,569,802 | PR_kwDODunzps4weGWl | 3,513 | Add desc parameter to filter | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2022-01-03T14:44:18 | 2022-01-05T18:31:25 | 2022-01-05T18:31:25 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3513",
"html_url": "https://github.com/huggingface/datasets/pull/3513",
"diff_url": "https://github.com/huggingface/datasets/pull/3513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3513.patch",
"merged_at": "2022-01-05T18:31:24"
} | Fix #3317 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3513/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3512/comments | https://api.github.com/repos/huggingface/datasets/issues/3512/events | https://github.com/huggingface/datasets/issues/3512 | 1,092,359,973 | I_kwDODunzps5BHBcl | 3,512 | No Data format found | {
"login": "shazzad47",
"id": 57741378,
"node_id": "MDQ6VXNlcjU3NzQxMzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shazzad47",
"html_url": "https://github.com/shazzad47",
"followers_url": "https://api.github.com/users/shazzad47/followers",
"following_url": "https://api.github.com/users/shazzad47/following{/other_user}",
"gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions",
"organizations_url": "https://api.github.com/users/shazzad47/orgs",
"repos_url": "https://api.github.com/users/shazzad47/repos",
"events_url": "https://api.github.com/users/shazzad47/events{/privacy}",
"received_events_url": "https://api.github.com/users/shazzad47/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi, which dataset is giving you an error?"
] | 2022-01-03T09:41:11 | 2022-01-17T13:26:05 | 2022-01-17T13:26:05 | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3512/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3511/comments | https://api.github.com/repos/huggingface/datasets/issues/3511/events | https://github.com/huggingface/datasets/issues/3511 | 1,092,170,411 | I_kwDODunzps5BGTKr | 3,511 | Dataset | {
"login": "MIKURI0114",
"id": 92849978,
"node_id": "U_kgDOBYjHOg",
"avatar_url": "https://avatars.githubusercontent.com/u/92849978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MIKURI0114",
"html_url": "https://github.com/MIKURI0114",
"followers_url": "https://api.github.com/users/MIKURI0114/followers",
"following_url": "https://api.github.com/users/MIKURI0114/following{/other_user}",
"gists_url": "https://api.github.com/users/MIKURI0114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MIKURI0114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MIKURI0114/subscriptions",
"organizations_url": "https://api.github.com/users/MIKURI0114/orgs",
"repos_url": "https://api.github.com/users/MIKURI0114/repos",
"events_url": "https://api.github.com/users/MIKURI0114/events{/privacy}",
"received_events_url": "https://api.github.com/users/MIKURI0114/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Can you reopen with the correct dataset name (if relevant)?\r\n\r\nThanks",
"The dataset viewer was down tonight. It works again."
] | 2022-01-03T02:03:23 | 2022-01-03T08:41:26 | 2022-01-03T08:23:07 | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3511/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3510/comments | https://api.github.com/repos/huggingface/datasets/issues/3510/events | https://github.com/huggingface/datasets/issues/3510 | 1,091,997,004 | I_kwDODunzps5BFo1M | 3,510 | `wiki_dpr` details for Open Domain Question Answering tasks | {
"login": "pk1130",
"id": 40918514,
"node_id": "MDQ6VXNlcjQwOTE4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/40918514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pk1130",
"html_url": "https://github.com/pk1130",
"followers_url": "https://api.github.com/users/pk1130/followers",
"following_url": "https://api.github.com/users/pk1130/following{/other_user}",
"gists_url": "https://api.github.com/users/pk1130/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pk1130/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pk1130/subscriptions",
"organizations_url": "https://api.github.com/users/pk1130/orgs",
"repos_url": "https://api.github.com/users/pk1130/repos",
"events_url": "https://api.github.com/users/pk1130/events{/privacy}",
"received_events_url": "https://api.github.com/users/pk1130/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).",
"Closed by:\r\n- #3534"
] | 2022-01-02T11:04:01 | 2022-02-17T13:46:20 | 2022-02-17T13:46:20 | NONE | null | null | null | Hey guys!
Thanks for creating the `wiki_dpr` dataset!
I am currently trying to use the dataset for context retrieval with DPR on NQ questions, and I need details about what each of the files and data instances means, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience! Thanks a ton!
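In the meantime, a quick way to inspect the instances yourself (the config name is taken from the dataset card, and streaming support is an assumption made here to avoid the very large full download):
```python
from datasets import load_dataset

# The "no_index" config skips the FAISS index files; each record should
# expose id, text, title, and a 768-d "embeddings" vector.
ds = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train", streaming=True)
print(next(iter(ds)))
```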
P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3510/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3507/comments | https://api.github.com/repos/huggingface/datasets/issues/3507/events | https://github.com/huggingface/datasets/issues/3507 | 1,091,214,808 | I_kwDODunzps5BCp3Y | 3,507 | Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nI don't really have an opinion regarding the JSON metadata as I don't know enough about it.\r\n\r\n",
"I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) \r\n\r\n(Ultimately the CI can run on \"HuggingFace Actions\" instead of on GitHub)",
"The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags:\r\n- Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow)\r\n- Size of each split in MB and number of examples. Again this can be moved to the dataset tags\r\n- Feature type of each column\r\n- supported task templates (it defines what columns correspond to the features and labels for example)\r\n\r\nBut it also has this, which I'm not sure if it should be in the tags or not:\r\n- Checksums of the downloaded files for integrity verifications\r\n\r\nSo ultimately this file could probably be deprecated in favor of having the infos in the tags.\r\n\r\n> Also note that for generating both (dataset_infos.json file and dummy data), the entire dataset needs being downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).\r\n\r\nTo get the exact number of examples and size in MB of the dataset, one needs to download and generate it completely. IMO these infos are very important when someone considers using a dataset. Though using streaming we could do some extrapolation to have approximate values instead.\r\n\r\nFor the integrity verifications we also need the number of examples and the checksums of the downloaded files, so it requires the dataset to be fully downloaded once. This can be optional though.\r\n\r\n> IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work)\r\n\r\nI agree with this. Usually if a dataset works in streaming mode, then it works in non-streaming mode (the other way around is not true though).\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data.\r\n\r\nYes indeed, or at least make sure that it was tested on the true data.",
"(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)",
"I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags.\r\n\r\n> On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. <\r\n> > Yes indeed, or at least make sure that it was tested on the true data.\r\n\r\nI like the idea of testing streaming and falling back to the dummy data test if streaming does not work. Generating dummy data can be very tedious, so this would be a nice incentive for the contributors to make their datasets streamable. ",
"CC: @severo ",
"About dummy data, please see e.g. this PR: https://github.com/huggingface/datasets/pull/3692/commits/62368daac0672041524a471386d5e78005cf357a\r\n- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I'm generating the file for `pubmed` (see above) in a GCP instance: it's running for more than 3 hours and only 9 million examples generated so far (before my PR, it had 32 million, now it has more).",
"I mention in https://github.com/huggingface/datasets-server/wiki/Preliminary-design that the future \"datasets server\" could be in charge of generating both the dummy data and the dataset-info.json file if required (or their equivalent).",
"Hi ! I think dummy data generation is out of scope for the datasets server, since it's about generating the original data files.\r\n\r\nThat would be amazing to have it generate the dataset_infos.json though !",
"From some offline discussion with @mariosasko and especially for vision datasets, we'll probably not require dummy data anymore and use streaming instead :) This will make adding a new dataset much easier.\r\nThis should also make sure that streaming works as expected directly in the CI, without having to check the dataset viewer once the PR is merged",
"OK. I removed the \"dummy data\" item from the services of the dataset server",
"It seems that migration from dataset-info.json to dataset card YAML has been acted.\r\n\r\nProbably it's a good idea, but I didn't find the pros and cons of this decision, so I put some I could think of:\r\n\r\npros:\r\n- only one file to parse, share, sync\r\n- it gives a hint to the users that if you write your dataset card, you should also specify the metadata\r\n\r\ncons:\r\n- the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n- YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n- two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n- [low priority] besides the JSON file, we might want to support yaml or toml file if the user prefers (as [prettier](https://prettier.io/docs/en/configuration.html) and others do for their config files, for example). Inside the md, I understand that only YAML is allowed",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nNote that we could simply not have the checksums in the YAML metadata at all, or maybe at one point have a pointer to another file instead.\r\n\r\nWe can also choose to hide (collapse) certain sections in the YAML by default when we open the dataset card editor.\r\n\r\n> two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file.\r\n\r\nI think it's fine for now. Later if we really end up with too many YAML sections we can see if we need to tweak the API endpoints or the `datasets`/`huggingface_hub` tools\r\n\r\n> YAML vs JSON: not sure which one is easier for users to fill and maintain\r\n\r\nRegarding YAML vs JSON: I think YAML is easier to write by hand, and I also think that it's better for consistency - i.e. we're using more and more YAML to configure models/datasets/spaces",
"I didn't know the decision was already taken. Good to know. ๐
",
"> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums\r\n\r\nWe can definitely work on this on the hub side to make the UX better",
"Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets (see [here](https://www.tensorflow.org/datasets/community_catalog/huggingface)).\r\n\r\nFYI I noticed today that they are using the exported dataset_infos.json files from github to get the metadata (see their code [here](https://github.com/tensorflow/datasets/blob/a482f01c036a10496f5e22e69a2ef81b707cc418/tensorflow_datasets/scripts/documentation/build_community_catalog.py#L261))",
"Metadata is now stored as YAML, and dummy data is deprecated, so I think we can close this issue."
] | 2021-12-30T17:04:25 | 2022-11-04T15:31:38 | 2022-11-04T15:31:37 | MEMBER | null | null | null | I open this issue to have a public discussion about this topic and make a decision.
As previously discussed, once we have the metadata in the dataset card (the README file, containing both Markdown info and YAML tags), what is the point of also having the JSON metadata (the dataset_infos.json file)?
On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However:
- the dataset preview feature is already an indirect test that the dataset loads correctly (it also tests it is streamable though)
- we are migrating canonical datasets to the Hub
Do we really need to continue testing them in our CI?
Also note that for generating both (the dataset_infos.json file and the dummy data), the entire dataset needs to be downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).
Feel free to ping other people for the discussion.
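For concreteness, the JSON metadata under discussion carries roughly this shape per config (the field names are real `dataset_infos.json` keys, while the values are purely illustrative):
```python
# Illustrative dataset_infos.json entry, written as a Python dict
dataset_info = {
    "download_size": 7_420_000,   # bytes fetched from the source
    "dataset_size": 12_300_000,   # size of the generated Arrow data
    "splits": {"train": {"num_bytes": 11_000_000, "num_examples": 67_000}},
    "features": {"text": {"dtype": "string", "_type": "Value"}},
    "download_checksums": {"https://example.org/data.zip": {"num_bytes": 7_420_000, "checksum": "<sha256>"}},
}
```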
CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3507/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3506/comments | https://api.github.com/repos/huggingface/datasets/issues/3506/events | https://github.com/huggingface/datasets/pull/3506 | 1,091,166,595 | PR_kwDODunzps4wZpot | 3,506 | Allows DatasetDict.filter to have batching option | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-30T15:22:22 | 2022-01-04T10:24:28 | 2022-01-04T10:24:27 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3506",
"html_url": "https://github.com/huggingface/datasets/pull/3506",
"diff_url": "https://github.com/huggingface/datasets/pull/3506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3506.patch",
"merged_at": "2022-01-04T10:24:27"
} | - Related to: #3244
- Fixes: #3503
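For illustration, a usage sketch of the new behavior (the dataset and predicate are just examples; the semantics are assumed to mirror `Dataset.filter`):
```python
from datasets import load_dataset

dsets = load_dataset("glue", "sst2")  # a DatasetDict with train/validation/test
# With batched=True, the predicate receives a dict of lists and must return
# one boolean per example in the batch, applied to every split at once.
short_ones = dsets.filter(lambda batch: [len(s) < 50 for s in batch["sentence"]], batched=True)
```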
We extend `.filter(..., batched: bool)` support to `DatasetDict`, as illustrated in the sketch above. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3506/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3505/comments | https://api.github.com/repos/huggingface/datasets/issues/3505/events | https://github.com/huggingface/datasets/issues/3505 | 1,091,150,820 | I_kwDODunzps5BCaPk | 3,505 | cast_column function not working with map function in streaming mode for Audio features | {
"login": "ashu5644",
"id": 8268102,
"node_id": "MDQ6VXNlcjgyNjgxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashu5644",
"html_url": "https://github.com/ashu5644",
"followers_url": "https://api.github.com/users/ashu5644/followers",
"following_url": "https://api.github.com/users/ashu5644/following{/other_user}",
"gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions",
"organizations_url": "https://api.github.com/users/ashu5644/orgs",
"repos_url": "https://api.github.com/users/ashu5644/repos",
"events_url": "https://api.github.com/users/ashu5644/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashu5644/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] | 2021-12-30T14:52:01 | 2022-01-18T19:54:07 | 2022-01-18T19:54:07 | NONE | null | null | null | ## Describe the bug
I am trying to use the `Audio` class to load audio features for a custom dataset. I am able to cast the 'audio' feature to the `Audio` type with the `cast_column` function, but when using the `map` function I do not get the cast `Audio` feature; I only get the path of the audio file.
With the `load_dataset` call, the 'audio' feature has string type. After `cast_column`, it is converted to the `Audio` type. But inside the `map` function I cannot get the `Audio` type for the audio feature and instead receive string data containing only the file path, so I am not able to use the processor in the `encode` function.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor
def encode(batch, processor):
print("Audio: ",batch['audio'])
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
return batch
def print_ds(ds):
iterator = iter(ds)
for d in iterator:
print("Data: ",d)
break
processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path)
dataset = load_dataset("custom_dataset.py","train",data_files={'train':'train_path.txt'},
data_dir="data", streaming=True, split="train")
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.map(lambda x: encode(x,processor))
print("Features: ",dataset.features)
print_ds(dataset)
```
## Expected results
The `map` function should yield the `Audio`-typed feature so that it can be used with the processor function; instead, the processor call raises an error because only the file path is passed through.
## Actual results
# after load_dataset call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)}
Data: {'sentence': 'เคเคฐ เคเคชเคจเฅ เคชเฅเค เคเฅ เคฎเคพเค เคเฅ เคธเฅเคตเคพเคฆเคฟเคทเฅเค เคเคฐเคฎเคเคฐเคฎ เคเคฒเฅเคฌเคฟเคฏเคพเค เคนเฅเคชเคคเฅ\n', 'audio': 'data/0116_003.wav'}
# after cast_column call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)}
Data: {'sentence': 'เคเคฐ เคเคชเคจเฅ เคชเฅเค เคเฅ เคฎเคพเค เคเฅ เคธเฅเคตเคพเคฆเคฟเคทเฅเค เคเคฐเคฎเคเคฐเคฎ เคเคฒเฅเคฌเคฟเคฏเคพเค เคนเฅเคชเคคเฅ\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ...,
1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}}
# after map call
Features: None
Audio: data/0116_003.wav
Traceback (most recent call last):
File "demo2.py", line 36, in <module>
print_ds(dataset)
File "demo2.py", line 11, in print_ds
for d in iterator:
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "demo2.py", line 32, in <lambda>
dataset = dataset.map(lambda x: batch_encode(x,processor))
File "demo2.py", line 6, in batch_encode
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
TypeError: string indices must be integers
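A possible interim workaround, not the library fix (the audio loader and the path-vs-dict check are assumptions for illustration):
```python
import soundfile as sf  # any audio reader works; soundfile is just an example

def encode(batch, processor):
    audio = batch["audio"]
    if isinstance(audio, str):  # map() dropped the Audio feature -> raw path
        array, sampling_rate = sf.read(audio)
    else:  # the decoded Audio dict survived
        array, sampling_rate = audio["array"], audio["sampling_rate"]
    batch["input_values"] = processor(array, sampling_rate=16_000).input_values
    return batch
```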
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3505/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3504/comments | https://api.github.com/repos/huggingface/datasets/issues/3504/events | https://github.com/huggingface/datasets/issues/3504 | 1,090,682,230 | I_kwDODunzps5BAn12 | 3,504 | Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst | {
"login": "ToddMorrill",
"id": 12600692,
"node_id": "MDQ6VXNlcjEyNjAwNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ToddMorrill",
"html_url": "https://github.com/ToddMorrill",
"followers_url": "https://api.github.com/users/ToddMorrill/followers",
"following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}",
"gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions",
"organizations_url": "https://api.github.com/users/ToddMorrill/orgs",
"repos_url": "https://api.github.com/users/ToddMorrill/repos",
"events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}",
"received_events_url": "https://api.github.com/users/ToddMorrill/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ToddMorrill, thanks for reporting.\r\n\r\nThree weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu\r\n\r\nThey told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back online asap.",
"Hi @ToddMorrill, people from the Pile team have mirrored their data in a new host server: https://mystic.the-eye.eu\r\n\r\nSee:\r\n- #3627\r\n\r\nIt should work if you update your URL.\r\n\r\nWe should also update the URL in our course material.",
"The old URL is still present in the HuggingFace course here: \r\nhttps://huggingface.co/course/chapter5/4?fw=pt\r\n\r\nI have created a PR for the Notebook here: https://github.com/huggingface/notebooks/pull/148\r\nNot sure if the HTML is in a public repo. I wasn't able to find it. ",
"Fixed the other two URLs here: \r\nhttps://github.com/mwunderlich/notebooks/pull/1"
] | 2021-12-29T18:23:20 | 2022-02-18T07:49:00 | 2022-02-17T15:04:25 | NONE | null | null | null | ## Describe the bug
I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt).
https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
I also tried with `wget` as follows.
```
wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst
```
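A hedged sketch of the updated call using the mirror host mentioned in the comments (https://mystic.the-eye.eu); the directory layout on the mirror is assumed to match the original:
```python
from datasets import load_dataset

# mirror URL from the comments; the path layout on the mirror is an assumption
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
```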
## Expected results
I expect to be able to download this file.
## Actual results
Traceback
```
---------------------------------------------------------------------------
timeout Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
158 try:
--> 159 conn = connection.create_connection(
160 (self._dns_host, self.port), self.timeout, **extra_kw
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
/usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
timeout: timed out
During handling of the above exception, another exception occurred:
ConnectTimeoutError Traceback (most recent call last)
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
664 # Make the request on the httplib connection object.
--> 665 httplib_response = self._make_request(
666 conn,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
375 try:
--> 376 self._validate_conn(conn)
377 except (SocketTimeout, BaseSSLError) as e:
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 996 conn.connect()
997
/usr/lib/python3/dist-packages/urllib3/connection.py in connect(self)
313 # Add certificate verification
--> 314 conn = self._new_conn()
315 hostname = self.host
/usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self)
163 except SocketTimeout:
--> 164 raise ConnectTimeoutError(
165 self,
ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
438 if not chunked:
--> 439 resp = conn.urlopen(
440 method=request.method,
/usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
718
--> 719 retries = retries.increment(
720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
/usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
435 if new_retry.is_exhausted():
--> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause))
437
MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
During handling of the above exception, another exception occurred:
ConnectTimeout Traceback (most recent call last)
/tmp/ipykernel_15104/606583593.py in <module>
3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :)
4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
6 pubmed_dataset
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1655
1656 # Create a dataset builder
-> 1657 builder_instance = load_dataset_builder(
1658 path=path,
1659 name=name,
~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1492 download_config = download_config.copy() if download_config else DownloadConfig()
1493 download_config.use_auth_token = use_auth_token
-> 1494 dataset_module = dataset_module_factory(
1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1496 )
~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1116 # Try packaged
1117 if path in _PACKAGED_DATASETS_MODULES:
-> 1118 return PackagedDatasetModuleFactory(
1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode
1120 ).get_module()
~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self)
773 else get_patterns_locally(str(Path().resolve()))
774 )
--> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.download_config.use_auth_token)
776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name]
777 builder_kwargs = {"hash": hash, "data_files": data_files}
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
576 for key, patterns_for_key in patterns.items():
577 out[key] = (
--> 578 DataFilesList.from_local_or_remote(
579 patterns_for_key,
580 base_path=base_path,
~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)
545 base_path = base_path if base_path is not None else str(Path().resolve())
546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
--> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)
548 return cls(data_files, origin_metadata)
549
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token)
492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None
493 ) -> Tuple[str]:
--> 494 return thread_map(
495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token),
496 data_files,
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs)
92 """
93 from concurrent.futures import ThreadPoolExecutor
---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
95
96
~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs)
74 map_args.update(chunksize=chunksize)
75 with PoolExecutor(**pool_kwargs) as ex:
---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
77
78
~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self)
252 def __iter__(self):
253 try:
--> 254 for obj in super(tqdm_notebook, self).__iter__():
255 # return super(tqdm...) will not catch exception
256 yield obj
~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self)
1171 # (note: keep this check outside the loop for performance)
1172 if self.disable:
-> 1173 for obj in iterable:
1174 yield obj
1175 return
/usr/lib/python3.8/concurrent/futures/_base.py in result_iterator()
617 # Careful not to keep a reference to the popped future
618 if timeout is None:
--> 619 yield fs.pop().result()
620 else:
621 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout)
442 raise CancelledError()
443 elif self._state == FINISHED:
--> 444 return self.__get_result()
445 else:
446 raise TimeoutError()
/usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self)
387 if self._exception:
388 try:
--> 389 raise self._exception
390 finally:
391 # Break a reference cycle with the exception in self._exception
/usr/lib/python3.8/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token)
483 if isinstance(data_file, Url):
484 data_file = str(data_file)
--> 485 return (request_etag(data_file, use_auth_token=use_auth_token),)
486 else:
487 data_file = str(data_file.resolve())
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token)
489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]:
490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token)
--> 491 response = http_head(url, headers=headers, max_retries=3)
492 response.raise_for_status()
493 etag = response.headers.get("ETag") if response.ok else None
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
474 headers = copy.deepcopy(headers) or {}
475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent"))
--> 476 response = _request_with_retry(
477 method="HEAD",
478 url=url,
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
408 if tries > max_retries:
--> 409 raise err
410 else:
411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]")
~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
403 tries += 1
404 try:
--> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
406 success = True
407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs)
58 # cases, and look like a memory leak in others.
59 with sessions.Session() as session:
---> 60 return session.request(method=method, url=url, **kwargs)
61
62
/usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
531 }
532 send_kwargs.update(settings)
--> 533 resp = self.send(prep, **send_kwargs)
534
535 return resp
/usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs)
644
645 # Send the request
--> 646 r = adapter.send(request, **kwargs)
647
648 # Total elapsed time of the request (approximately)
/usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
502 # TODO: Remove this in 3.0.0: see #2811
503 if not isinstance(e.reason, NewConnectionError):
--> 504 raise ConnectTimeout(e, request=request)
505
506 if isinstance(e.reason, ResponseError):
ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)'))
```
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3504/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3503/comments | https://api.github.com/repos/huggingface/datasets/issues/3503/events | https://github.com/huggingface/datasets/issues/3503 | 1,090,472,735 | I_kwDODunzps5A_0sf | 3,503 | Batched in filter throws error | {
"login": "gpucce",
"id": 32967787,
"node_id": "MDQ6VXNlcjMyOTY3Nzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gpucce",
"html_url": "https://github.com/gpucce",
"followers_url": "https://api.github.com/users/gpucce/followers",
"following_url": "https://api.github.com/users/gpucce/following{/other_user}",
"gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gpucce/subscriptions",
"organizations_url": "https://api.github.com/users/gpucce/orgs",
"repos_url": "https://api.github.com/users/gpucce/repos",
"events_url": "https://api.github.com/users/gpucce/events{/privacy}",
"received_events_url": "https://api.github.com/users/gpucce/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2021-12-29T12:01:04 | 2022-01-04T10:24:27 | 2022-01-04T10:24:27 | CONTRIBUTOR | null | null | null | I hope this is really a bug; I could not find it among the open issues.
## Describe the bug
Using `batched=False` in `Dataset.filter` throws an error:
```python
TypeError: filter() got an unexpected keyword argument 'batched'
```
but in the docs it is listed as an argument.
## Steps to reproduce the bug
```python
task = "mnli"
max_length = 128
tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/")
dataset = load_dataset("glue", task)
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mnli-mm": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
##### tokenization_parameters
sentence1_key, sentence2_key = task_to_keys[task]
def preprocess_function(examples, max_length):
if sentence2_key is None:
return tokenizer(
examples[sentence1_key], truncation=True, max_length=max_length
)
return tokenizer(
examples[sentence1_key],
examples[sentence2_key],
truncation=False,
padding="max_length",
max_length=max_length,
)
encoded_dataset = dataset.map(
lambda x: preprocess_function(x, max_length=max_length), batched=False
)
encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False)
```
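A hedged workaround sketch until `batched` is supported: omit the keyword, since `filter` in these versions applies the predicate example by example anyway.
```python
# Hedged workaround: drop the unsupported `batched` kwarg; the predicate is then
# applied per example, which is what batched=False intended.
encoded_dataset = encoded_dataset.filter(lambda x: len(x["input_ids"]) <= max_length)
```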
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1, 1.17.0
- Platform: ubuntu
- Python version: 3.8.12
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3503/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3502/comments | https://api.github.com/repos/huggingface/datasets/issues/3502/events | https://github.com/huggingface/datasets/pull/3502 | 1,090,438,558 | PR_kwDODunzps4wXSLi | 3,502 | Add QuALITY | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Thanks for your contribution, @jaketae. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2021-12-29T10:58:46 | 2022-10-03T09:36:14 | 2022-10-03T09:36:14 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3502",
"html_url": "https://github.com/huggingface/datasets/pull/3502",
"diff_url": "https://github.com/huggingface/datasets/pull/3502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3502.patch",
"merged_at": null
} | Fixes #3441. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3502/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3501/comments | https://api.github.com/repos/huggingface/datasets/issues/3501/events | https://github.com/huggingface/datasets/pull/3501 | 1,090,413,758 | PR_kwDODunzps4wXM8H | 3,501 | Update pib dataset card | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-29T10:14:40 | 2021-12-29T11:13:21 | 2021-12-29T11:13:21 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3501",
"html_url": "https://github.com/huggingface/datasets/pull/3501",
"diff_url": "https://github.com/huggingface/datasets/pull/3501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3501.patch",
"merged_at": "2021-12-29T11:13:21"
} | Related to #3496 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3501/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3500/comments | https://api.github.com/repos/huggingface/datasets/issues/3500/events | https://github.com/huggingface/datasets/pull/3500 | 1,090,406,133 | PR_kwDODunzps4wXLTB | 3,500 | Docs: Add VCTK dataset description | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-29T10:02:05 | 2022-01-04T10:46:02 | 2022-01-04T10:25:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3500",
"html_url": "https://github.com/huggingface/datasets/pull/3500",
"diff_url": "https://github.com/huggingface/datasets/pull/3500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3500.patch",
"merged_at": "2022-01-04T10:25:09"
} | This PR is a very minor followup to #1837, with only docs changes (single comment string). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3500/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3499/comments | https://api.github.com/repos/huggingface/datasets/issues/3499/events | https://github.com/huggingface/datasets/issues/3499 | 1,090,132,618 | I_kwDODunzps5A-hqK | 3,499 | Adjusting chunk size for streaming datasets | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE `\r\n\r\nCurrently this is unfortunately done in a single thread, so it blocks the processing to download and uncompress the next block. At one point it would be nice to be able to do that in parallel !",
"Hi! Thanks for the help, I will try it :)"
] | 2021-12-28T21:17:53 | 2022-05-06T16:29:05 | 2022-05-06T16:29:05 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?), I hit a performance bottleneck because of the frequent decompression.
**Describe the solution you'd like**
I would appreciate a parameter in the `load_dataset` function that allows me to set the chunk size myself (to a value like 100,000 in my case). That way, I hope to improve the processing time.
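A hedged sketch of the buffering tweak suggested in the comments (raising `fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE`); the 100 MiB value is an illustrative assumption:
```python
import fsspec

# Hedged sketch: raise fsspec's default buffered-read block size before streaming;
# the default is about 5 MiB, and 100 MiB here is an arbitrary illustration.
fsspec.spec.AbstractBufferedFile.DEFAULT_BLOCK_SIZE = 100 * 2**20  # bytes

from datasets import load_dataset
ds = load_dataset("mc4", "en", split="train", streaming=True)
```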
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3499/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3498/comments | https://api.github.com/repos/huggingface/datasets/issues/3498/events | https://github.com/huggingface/datasets/pull/3498 | 1,090,096,332 | PR_kwDODunzps4wWL5U | 3,498 | update `pretty_name` for first 200 datasets | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-28T19:50:07 | 2022-07-10T14:36:53 | 2022-01-05T16:38:21 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3498",
"html_url": "https://github.com/huggingface/datasets/pull/3498",
"diff_url": "https://github.com/huggingface/datasets/pull/3498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3498.patch",
"merged_at": "2022-01-05T16:38:21"
} | I made a script some time back to fetch `pretty_names` from the `papers_with_code` dataset, along with some fallback rules in case a dataset wasn't available on `papers_with_code`. This PR updates them in the `README`s of `datasets`. I took only the first 200 datasets into consideration and, after some eyeballing, most of them looked good to me! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3498/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3497/comments | https://api.github.com/repos/huggingface/datasets/issues/3497/events | https://github.com/huggingface/datasets/issues/3497 | 1,090,050,148 | I_kwDODunzps5A-Nhk | 3,497 | Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py",
"I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py"
] | 2021-12-28T18:03:49 | 2022-01-21T13:22:27 | 2022-01-21T13:22:27 | MEMBER | null | null | null | Running:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
raw_datasets = raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
)
num_workers = 16
def prepare_dataset(batch):
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
return batch
raw_datasets.map(
prepare_dataset,
remove_columns=next(iter(raw_datasets.values())).column_names,
    num_proc=num_workers,
desc="preprocess datasets",
)
```
gives
```bash
File "/home/patrick/experiments/run_bug.py", line 25, in <module>
raw_datasets.map(
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map
{
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp>
k: dataset.map(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map
shards = [
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp>
self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard
return self.select(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices
return Dataset(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__
raise ValueError(
ValueError: External features info don't match the dataset:
Got
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
but expected something like
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
```
Versions:
```python
- `datasets` version: 1.16.2.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
and `transformers`:
```
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
``` | {
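Until the underlying bug is fixed, a hedged workaround sketch: the failure originates in `Dataset.shard` when `num_proc > 1`, so running the map single-process avoids the features mismatch (at the cost of speed).
```python
# Hedged workaround: omit num_proc so no sharding (and no features mismatch) occurs.
raw_datasets = raw_datasets.map(
    prepare_dataset,
    remove_columns=next(iter(raw_datasets.values())).column_names,
    desc="preprocess datasets",
)
```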
"url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3497/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3496/comments | https://api.github.com/repos/huggingface/datasets/issues/3496/events | https://github.com/huggingface/datasets/pull/3496 | 1,089,989,155 | PR_kwDODunzps4wV1_w | 3,496 | Update version of pib dataset and make it streamable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It seems like there is still an error: `Message: 'TarContainedFile' object has no attribute 'readable'`\r\n\r\nhttps://huggingface.co/datasets/pib/viewer",
"@severo I was wondering about that...\r\n\r\nIt works fine when I run it in streaming mode in my terminal:\r\n```python\r\nIn [3]: from datasets import load_dataset; ds = load_dataset(\"pib\", \"gu-pa\", split=\"train\", streaming=True); item = next(iter(ds))\r\n\r\nIn [4]: item\r\nOut[4]: \r\n{'translation': {'gu': 'เชเชตเซ เชจเชฟเชฐเซเชฃเชฏ เชฒเซเชตเชพเชฏเซ เชนเชคเซ เชเซ เชเชเชคเชชเซเชฐเซเชตเชเชจเซ เชเชพเชฎเชเซเชฐเซ เชนเชพเชฅ เชงเชฐเชตเชพ, เชเชพเชฏเชฆเซเชธเชฐ เช
เชจเซ เชเซเชเชจเชฟเชเชฒ เชฎเซเชฒเซเชฏเชพเชเชเชจ เชเชฐเชตเชพ, เชตเซเชจเซเชเชฐ เชเซเชชเชฟเชเชฒ เชเชจเซเชตเซเชธเซเชเชฎเซเชจเซเช เชธเชฎเชฟเชคเชฟเชจเซ เชฌเซเช เช เชฏเซเชเชตเชพ เชตเชเซเชฐเซ เชเชเชเชเชซเชจเซ เชเชฐเชตเชพเชฎเชพเช เชเชตเซเชฒ เชชเซเชฐเชคเชฟเชฌเชฆเซเชงเชคเชพเชจเชพ 0.50 เชเชเชพ เชธเซเชงเซ เช
เชจเซ เชฌเชพเชเซเชจเซ เชฐเชเชฎ เชเชซเชเชซเชเชธเชจเซ เชชเซเชฐเซเชฃ เชเชฐเชตเชพเชฎเชพเช เชเชตเชถเซ.',\r\n 'pa': 'เจเจน เจตเฉ เจซเฉเจธเจฒเจพ เจเฉเจคเจพ เจเจฟเจ เจเจฟ เจเฉฑเจซเจเจเจเจ เจ
เจคเฉ เจฌเจเจพเจ เจฒเจ เจเฉเจคเฉเจเจ เจเจเจเจ เจตเจเจจเจฌเฉฑเจงเจคเจพเจตเจพเจ เจฆเฉ 0.50 % เจฆเฉ เจธเฉเจฎเจพ เจคเฉฑเจ เจเฉฑเจซเจเจเฉฑเจธ เจจเฉเฉฐ เจฎเจฟเจฒเจฟเจ เจเจพเจเจเจพ, เจเจธ เจจเจพเจฒ เจเฉฑเจฆเจฎ เจชเฉเฉฐเจเฉ เจจเจฟเจตเฉเจธเจผ เจเจฎเฉเจเฉ เจฆเฉ เจฌเฉเจ เจ เจฆเจพ เจเจฏเฉเจเจจ เจเจเจฟเจค เจธเจพเจตเจงเจพเจจเฉ, เจเจพเจจเฉเฉฐเจจเฉ เจ
เจคเฉ เจคเจเจจเฉเจเฉ เจฎเฉเฉฑเจฒเจพเจเจเจฃ เจฒเจ เจธเฉฐเจเจพเจฒเจจ เจเจฐเจ เจเจฆเจฟ เจฆเฉ เจชเฉเจฐเจคเฉ เจนเฉเจตเฉเจเฉเฅค'}}\r\n```",
"OK, it works now!\r\n\r\n<img width=\"794\" alt=\"Capture dโeฬcran 2022-01-03 aฬ 15 41 44\" src=\"https://user-images.githubusercontent.com/1676121/147943676-6199d1a9-f288-4350-af96-a7c297ebb743.png\">\r\n"
] | 2021-12-28T16:01:55 | 2022-01-03T14:42:28 | 2021-12-29T08:42:57 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3496",
"html_url": "https://github.com/huggingface/datasets/pull/3496",
"diff_url": "https://github.com/huggingface/datasets/pull/3496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3496.patch",
"merged_at": "2021-12-29T08:42:57"
} | This PR:
- Updates version of pib dataset: from 0.0.0 to 1.3.0
- Makes the dataset streamable
Fix #3491.
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3496/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3495/comments | https://api.github.com/repos/huggingface/datasets/issues/3495/events | https://github.com/huggingface/datasets/issues/3495 | 1,089,983,632 | I_kwDODunzps5A99SQ | 3,495 | Add VoxLingua107 | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 2021-12-28T15:51:43 | 2021-12-28T15:51:43 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** 107 languages, totaling 6628 hours for the train split.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3495/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3494/comments | https://api.github.com/repos/huggingface/datasets/issues/3494/events | https://github.com/huggingface/datasets/pull/3494 | 1,089,983,103 | PR_kwDODunzps4wV0vB | 3,494 | Clone full repo to detect new tags when mirroring datasets on the Hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good catch !!",
"The CI fail is unrelated to this PR and fixed on master, merging :)"
] | 2021-12-28T15:50:47 | 2021-12-28T16:07:21 | 2021-12-28T16:07:20 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3494",
"html_url": "https://github.com/huggingface/datasets/pull/3494",
"diff_url": "https://github.com/huggingface/datasets/pull/3494.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3494.patch",
"merged_at": "2021-12-28T16:07:20"
} | The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags.
By cloning the full repository, we can properly detect a new release and tag all the dataset repositories accordingly.
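For illustration (an assumption about the mechanism, not the CI's actual code), the difference is easy to check: a shallow clone typically reports no release tags, while a full clone lists them.
```python
# Illustrative check: count the git tags visible in the current clone.
import subprocess

tags = subprocess.run(
    ["git", "tag", "--list"], capture_output=True, text=True, check=True
).stdout.splitlines()
print(f"{len(tags)} tags visible in this clone")
```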
cc @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3494/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3494/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3493/comments | https://api.github.com/repos/huggingface/datasets/issues/3493/events | https://github.com/huggingface/datasets/pull/3493 | 1,089,967,286 | PR_kwDODunzps4wVxfr | 3,493 | Fix VCTK encoding | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-28T15:23:36 | 2021-12-28T15:48:18 | 2021-12-28T15:48:17 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3493",
"html_url": "https://github.com/huggingface/datasets/pull/3493",
"diff_url": "https://github.com/huggingface/datasets/pull/3493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3493.patch",
"merged_at": "2021-12-28T15:48:17"
} | utf-8 encoding was missing in the VCTK dataset builder added in #3351 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3493/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3492/comments | https://api.github.com/repos/huggingface/datasets/issues/3492/events | https://github.com/huggingface/datasets/pull/3492 | 1,089,952,943 | PR_kwDODunzps4wVufr | 3,492 | Add `gzip` for `to_json` | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-28T15:01:11 | 2022-07-10T14:36:52 | 2022-01-05T13:03:36 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3492",
"html_url": "https://github.com/huggingface/datasets/pull/3492",
"diff_url": "https://github.com/huggingface/datasets/pull/3492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3492.patch",
"merged_at": "2022-01-05T13:03:35"
} | (Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3492/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3491/comments | https://api.github.com/repos/huggingface/datasets/issues/3491/events | https://github.com/huggingface/datasets/issues/3491 | 1,089,918,018 | I_kwDODunzps5A9tRC | 3,491 | Update version of pib dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2021-12-28T14:03:58 | 2021-12-29T08:42:57 | 2021-12-29T08:42:57 | MEMBER | null | null | null | On the Hub we have v0, while there exists v1.3.
Related to bigscience-workshop/data_tooling#130
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3491/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3490/comments | https://api.github.com/repos/huggingface/datasets/issues/3490/events | https://github.com/huggingface/datasets/issues/3490 | 1,089,730,181 | I_kwDODunzps5A8_aF | 3,490 | Does datasets support load text from HDFS? | {
"login": "dancingpipi",
"id": 20511825,
"node_id": "MDQ6VXNlcjIwNTExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dancingpipi",
"html_url": "https://github.com/dancingpipi",
"followers_url": "https://api.github.com/users/dancingpipi/followers",
"following_url": "https://api.github.com/users/dancingpipi/following{/other_user}",
"gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions",
"organizations_url": "https://api.github.com/users/dancingpipi/orgs",
"repos_url": "https://api.github.com/users/dancingpipi/repos",
"events_url": "https://api.github.com/users/dancingpipi/events{/privacy}",
"received_events_url": "https://api.github.com/users/dancingpipi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] | 2021-12-28T08:56:02 | 2022-02-14T14:00:51 | null | NONE | null | null | null | The raw text data is stored on HDFS because the dataset is too large to store on my development machine.
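(For reference, one possible interim workaround is to copy files out of HDFS via fsspec and then load them locally. This is only a rough sketch with made-up paths, and it assumes fsspec's `hdfs` driver, backed by pyarrow, is configured on the machine:)
```python
import fsspec
from datasets import load_dataset

# copy the raw text out of HDFS, then point load_dataset at the local copy
with fsspec.open("hdfs:///user/me/corpus.txt", "rb") as src, open("corpus.txt", "wb") as dst:
    dst.write(src.read())

dataset = load_dataset("text", data_files="corpus.txt")
```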
So I wonder: does `datasets` support reading data from HDFS directly, without a copy step like the one above? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3490/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3489/comments | https://api.github.com/repos/huggingface/datasets/issues/3489/events | https://github.com/huggingface/datasets/pull/3489 | 1,089,401,926 | PR_kwDODunzps4wT97d | 3,489 | Avoid unnecessary list creations | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@bryant1410 Thanks for working on this. Could you please split the PR into 4 or 5 smaller PRs (ideally one PR for each bullet point from your description) because it's not practical to review such a large PR, especially if the changes are not interrelated?"
] | 2021-12-27T18:20:56 | 2022-07-06T15:19:49 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3489",
"html_url": "https://github.com/huggingface/datasets/pull/3489",
"diff_url": "https://github.com/huggingface/datasets/pull/3489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3489.patch",
"merged_at": null
} | Like in `join([... for s in ...])`. Also changed other things that I saw:
* Use a `with` statement for the many `open` calls that were missing one, so the files don't remain open.
* Remove unused variables.
* Many HTTP links converted into HTTPS (verified).
* Remove unnecessary "r" mode arg in `open` (double-checked it was actually the default in each case).
* Remove Python 2 style of using `super`.
* Run `pyupgrade $(find . -name "*.py" -type f) --py36-plus` (which already does some of the previous points).
* Run `dos2unix $(find . -name "*.py" -type f)` (CRLF to LF line endings).
* Fix typos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3489/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3488/comments | https://api.github.com/repos/huggingface/datasets/issues/3488/events | https://github.com/huggingface/datasets/issues/3488 | 1,089,345,653 | I_kwDODunzps5A7hh1 | 3,488 | URL query parameters are set as path in the compression hop for fsspec | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this way we don't need to guess the filename, what do you think ?"
] | 2021-12-27T16:29:00 | 2022-01-05T15:15:25 | null | MEMBER | null | null | null | ## Describe the bug
There is an issue with `StreamingDownloadManager._extract`.
I don't know how the test `test_streaming_gg_drive_gzipped` passes:
For
```python
TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL)
```
gives `urlpath`:
```python
'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz'
```
The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz`
## Steps to reproduce the bug
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager
dl_manager = StreamingDownloadManager()
urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz")
print(urlpath)
```
## Expected results
The query parameters should not be set as part of the path.
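For illustration, one way to derive the inner filename without leaking the query string (a sketch only; as discussed in the comments, another option is to drop the filename entirely, i.e. `gzip://::url`):
```python
import os
from urllib.parse import urlparse

url = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
inner_filename = os.path.basename(urlparse(url).path)  # "uc": the query string is ignored
print(f"gzip://{inner_filename}::{url}")
```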
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3488/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3487/comments | https://api.github.com/repos/huggingface/datasets/issues/3487/events | https://github.com/huggingface/datasets/pull/3487 | 1,089,209,031 | PR_kwDODunzps4wTVeN | 3,487 | Update ADD_NEW_DATASET.md | {
"login": "apergo-ai",
"id": 68908804,
"node_id": "MDQ6VXNlcjY4OTA4ODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apergo-ai",
"html_url": "https://github.com/apergo-ai",
"followers_url": "https://api.github.com/users/apergo-ai/followers",
"following_url": "https://api.github.com/users/apergo-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions",
"organizations_url": "https://api.github.com/users/apergo-ai/orgs",
"repos_url": "https://api.github.com/users/apergo-ai/repos",
"events_url": "https://api.github.com/users/apergo-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/apergo-ai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-27T12:24:51 | 2021-12-27T15:00:45 | 2021-12-27T15:00:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3487",
"html_url": "https://github.com/huggingface/datasets/pull/3487",
"diff_url": "https://github.com/huggingface/datasets/pull/3487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3487.patch",
"merged_at": "2021-12-27T15:00:45"
} | Fixed the `make style` prompt for Windows Terminal. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3487/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3486/comments | https://api.github.com/repos/huggingface/datasets/issues/3486/events | https://github.com/huggingface/datasets/pull/3486 | 1,089,171,551 | PR_kwDODunzps4wTNd1 | 3,486 | Fix weird spacing in ManualDownloadError message | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-27T11:20:36 | 2021-12-28T09:03:26 | 2021-12-28T09:00:28 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3486",
"html_url": "https://github.com/huggingface/datasets/pull/3486",
"diff_url": "https://github.com/huggingface/datasets/pull/3486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3486.patch",
"merged_at": "2021-12-28T09:00:28"
} | `textwrap.dedent` works based on the leading spaces of each line. Before this change, there wasn't any leading space. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3486/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3485/comments | https://api.github.com/repos/huggingface/datasets/issues/3485/events | https://github.com/huggingface/datasets/issues/3485 | 1,089,027,581 | I_kwDODunzps5A6T39 | 3,485 | skip columns which cannot set to specific format when set_format | {
"login": "tshu-w",
"id": 13161779,
"node_id": "MDQ6VXNlcjEzMTYxNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshu-w",
"html_url": "https://github.com/tshu-w",
"followers_url": "https://api.github.com/users/tshu-w/followers",
"following_url": "https://api.github.com/users/tshu-w/following{/other_user}",
"gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions",
"organizations_url": "https://api.github.com/users/tshu-w/orgs",
"repos_url": "https://api.github.com/users/tshu-w/repos",
"events_url": "https://api.github.com/users/tshu-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshu-w/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns",
"Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned."
] | 2021-12-27T07:19:55 | 2021-12-27T09:07:07 | 2021-12-27T09:07:07 | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.
**Describe the solution you'd like**
Skip columns that cannot be set to the specific format in `set_format`, instead of raising an error.
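(For reference, a workaround that already exists, pointed out in the comments, is `output_all_columns`; a minimal runnable sketch:)
```python
from datasets import Dataset

dataset = Dataset.from_dict({"input_ids": [[1, 2]], "text": ["hello"]})
# format only the tensor-friendly column, but still return the string column as-is
dataset.set_format("torch", columns=["input_ids"], output_all_columns=True)
dataset[0]  # {"input_ids": tensor([1, 2]), "text": "hello"}
```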
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3485/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3484/comments | https://api.github.com/repos/huggingface/datasets/issues/3484/events | https://github.com/huggingface/datasets/issues/3484 | 1,088,910,402 | I_kwDODunzps5A53RC | 3,484 | make shape verification to use ArrayXD instead of nested lists for map | {
"login": "tshu-w",
"id": 13161779,
"node_id": "MDQ6VXNlcjEzMTYxNzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshu-w",
"html_url": "https://github.com/tshu-w",
"followers_url": "https://api.github.com/users/tshu-w/followers",
"following_url": "https://api.github.com/users/tshu-w/following{/other_user}",
"gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions",
"organizations_url": "https://api.github.com/users/tshu-w/orgs",
"repos_url": "https://api.github.com/users/tshu-w/repos",
"events_url": "https://api.github.com/users/tshu-w/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshu-w/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! \r\n\r\nYes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic."
] | 2021-12-27T02:16:02 | 2022-01-05T13:54:03 | null | NONE | null | null | null | As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making the shape verification use ArrayXD instead of nested lists for `map` can help users avoid unnecessary casts. I notice datasets has already done something special for `input_ids` and `attention_mask`, which also becomes unnecessary once this feature is added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3484/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3483/comments | https://api.github.com/repos/huggingface/datasets/issues/3483/events | https://github.com/huggingface/datasets/pull/3483 | 1,088,784,157 | PR_kwDODunzps4wSAW4 | 3,483 | Remove unused phony rule from Makefile | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failure is unrelated to this PR and fixed on master, merging !"
] | 2021-12-26T14:37:13 | 2022-01-05T19:44:56 | 2022-01-05T16:34:12 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3483",
"html_url": "https://github.com/huggingface/datasets/pull/3483",
"diff_url": "https://github.com/huggingface/datasets/pull/3483.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3483.patch",
"merged_at": "2022-01-05T16:34:12"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3483/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3482/comments | https://api.github.com/repos/huggingface/datasets/issues/3482/events | https://github.com/huggingface/datasets/pull/3482 | 1,088,317,921 | PR_kwDODunzps4wQqE1 | 3,482 | Fix duplicate keys in NewsQA | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Flaky tests?",
"Thanks for your contribution, @bryant1410.\r\n\r\nI think the fix of the duplicate key in this PR was superseded by:\r\n- #3696\r\n\r\nI'm closing this because we are moving all dataset scripts from GitHub to the Hugging Face Hub."
] | 2021-12-24T11:01:59 | 2022-09-23T12:57:10 | 2022-09-23T12:57:10 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3482",
"html_url": "https://github.com/huggingface/datasets/pull/3482",
"diff_url": "https://github.com/huggingface/datasets/pull/3482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3482.patch",
"merged_at": null
} | * Fix duplicate keys in NewsQA when loading from CSV files.
* Fix s/narqa/newsqa/ in the download-manually error message.
* Make the download-manually error message display nicely when printed. Otherwise, it is hard to read due to spacing issues.
* Fix the format of the license text.
* Reformat the code to make it simpler. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3482/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3481/comments | https://api.github.com/repos/huggingface/datasets/issues/3481/events | https://github.com/huggingface/datasets/pull/3481 | 1,088,308,343 | PR_kwDODunzps4wQoJu | 3,481 | Fix overriding of filesystem info | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-24T10:42:31 | 2021-12-24T11:08:59 | 2021-12-24T11:08:59 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3481",
"html_url": "https://github.com/huggingface/datasets/pull/3481",
"diff_url": "https://github.com/huggingface/datasets/pull/3481.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3481.patch",
"merged_at": "2021-12-24T11:08:59"
} | Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from function to dict.
This generated a bug in filesystem methods that use `self.info()`, e.g. `fs.isfile()`.
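A simplified illustration of the bug pattern (not the actual class):
```python
class FS:
    def info(self, path):
        return {"name": path, "type": "file"}

fs = FS()
fs.info = {"name": "archive.gz", "type": "file"}  # overriding the method with a dict...
callable(fs.info)  # ...is now False, so any helper relying on self.info(), e.g. isfile(), breaks
```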
This PR:
- Adds tests for `fs.isfile` (that use `fs.info`).
- Fixes custom `BaseCompressedFileFileSystem.info` by removing its overriding. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3481/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3480/comments | https://api.github.com/repos/huggingface/datasets/issues/3480/events | https://github.com/huggingface/datasets/issues/3480 | 1,088,267,110 | I_kwDODunzps5A3aNm | 3,480 | the compression format requested when saving a dataset in json format is not respected | {
"login": "SaulLu",
"id": 55560583,
"node_id": "MDQ6VXNlcjU1NTYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaulLu",
"html_url": "https://github.com/SaulLu",
"followers_url": "https://api.github.com/users/SaulLu/followers",
"following_url": "https://api.github.com/users/SaulLu/following{/other_user}",
"gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions",
"organizations_url": "https://api.github.com/users/SaulLu/orgs",
"repos_url": "https://api.github.com/users/SaulLu/repos",
"events_url": "https://api.github.com/users/SaulLu/events{/privacy}",
"received_events_url": "https://api.github.com/users/SaulLu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq",
"I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week",
"Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods"
] | 2021-12-24T09:23:51 | 2022-01-05T13:03:35 | 2022-01-05T13:03:35 | CONTRIBUTOR | null | null | null | ## Describe the bug
In the documentation of the `to_json` method, it is stated in the parameters that
> **to_json_kwargs** - Parameters to pass to pandas's `pandas.DataFrame.to_json`.
However, when we pass for example `compression="gzip"`, the saved file is not compressed.
Would you also have expected compression to be applied? :relaxed:
## Steps to reproduce the bug
```python
my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]}
```
### Result with datasets
```python
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip")
!cat dic_with_datasets.jsonl.gz
```
output
```
{"a":1,"b":1}
{"a":2,"b":2}
{"a":3,"b":3}
```
Note: I would have expected to see binary data here.
### Result with pandas
```python
import pandas as pd
df = pd.DataFrame(my_dict)
df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip")
!cat dic_with_pandas.jsonl.gz
```
output
```
4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)���
```
Note: It looks like binary data
## Expected results
I would have expected that the saved result with datasets would also be a binary file
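(For reference, a workaround until `to_json` handles `compression` itself is to compress the written file afterwards; a minimal sketch, reusing `dataset` from above:)
```python
import gzip
import shutil

dataset.to_json("dic_with_datasets.jsonl")  # plain, uncompressed JSON lines
with open("dic_with_datasets.jsonl", "rb") as src, gzip.open("dic_with_datasets.jsonl.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)  # gzip it manually
```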
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3480/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3479/comments | https://api.github.com/repos/huggingface/datasets/issues/3479/events | https://github.com/huggingface/datasets/issues/3479 | 1,088,232,880 | I_kwDODunzps5A3R2w | 3,479 | Dataset preview is not available (I think for all Hugging Face datasets) | {
"login": "Abirate",
"id": 66887439,
"node_id": "MDQ6VXNlcjY2ODg3NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Abirate",
"html_url": "https://github.com/Abirate",
"followers_url": "https://api.github.com/users/Abirate/followers",
"following_url": "https://api.github.com/users/Abirate/following{/other_user}",
"gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Abirate/subscriptions",
"organizations_url": "https://api.github.com/users/Abirate/orgs",
"repos_url": "https://api.github.com/users/Abirate/repos",
"events_url": "https://api.github.com/users/Abirate/events{/privacy}",
"received_events_url": "https://api.github.com/users/Abirate/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"You're right, we have an issue today with the datasets preview. We're investigating.",
"It should be fixed now. Thanks for reporting.",
"Down again. ",
"Fixed for good."
] | 2021-12-24T08:18:48 | 2021-12-24T14:27:46 | 2021-12-24T14:27:46 | NONE | null | null | null | ## Dataset viewer issue for '*french_book_reviews*'
**Link:** https://huggingface.co/datasets/Abirate/french_book_reviews
**short description of the issue**
For my dataset, the dataset preview is no longer functional (it used to work: the dataset had been added the day before and it was fine...)
And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. (CET)).
**Am I the one who added this dataset?** Yes
**Note**: here is a screenshot showing the issue
![Dataset preview is not available for my dataset](https://user-images.githubusercontent.com/66887439/147333078-60734578-420d-4e91-8691-a90afeaa8948.jpg)
**And here for the glue dataset:**
![Dataset preview is not available for other Hugging Face datasets(glue)](https://user-images.githubusercontent.com/66887439/147333492-26fa530c-befd-4992-8361-70c51397a25a.jpg)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3479/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3478/comments | https://api.github.com/repos/huggingface/datasets/issues/3478/events | https://github.com/huggingface/datasets/pull/3478 | 1,087,860,180 | PR_kwDODunzps4wPMWq | 3,478 | Extend support for streaming datasets that use os.walk | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice. I'll update the dataset viewer once merged, and test on these four datasets"
] | 2021-12-23T16:42:55 | 2021-12-24T10:50:20 | 2021-12-24T10:50:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3478",
"html_url": "https://github.com/huggingface/datasets/pull/3478",
"diff_url": "https://github.com/huggingface/datasets/pull/3478.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3478.patch",
"merged_at": "2021-12-24T10:50:19"
} | This PR extends streaming-mode support for datasets that use `os.walk` by patching that function (a sketch of the idea follows the list below).
This PR adds support for streaming mode to datasets:
1. autshumato
1. code_x_glue_cd_code_to_text
1. code_x_glue_tc_nl_code_search_adv
1. nchlt
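A minimal sketch of the patching idea (illustrative names, not the actual implementation):
```python
import fsspec

def xwalk(urlpath):
    # resolve the filesystem behind the URL and delegate to fsspec's walk,
    # so os.walk-style iteration also works on remote paths in streaming mode
    fs, path = fsspec.core.url_to_fs(urlpath)
    yield from fs.walk(path)
```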
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3478/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3477/comments | https://api.github.com/repos/huggingface/datasets/issues/3477/events | https://github.com/huggingface/datasets/pull/3477 | 1,087,850,253 | PR_kwDODunzps4wPKPX | 3,477 | Use `iter_files` instead of `str(Path(...)` in image dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`iter_archive` is about to support ZIP archives. I think we should use this no ?\r\n\r\nsee #3347 https://github.com/huggingface/datasets/pull/3379",
"I was interested in the support for isfile/dir in remote.\r\n\r\nAnyway, `iter_files` will be available for community users.",
"I'm not a big fan of having two functions that do the same thing. What do you think ?",
"They do not do the same thing:\r\n- One iterates over files in a directory\r\n- The other I guess will iterate over the members of an archive",
"Makes sense ! Sounds good then - sorry for my misunderstanding\r\n\r\nNote that `iter_archive` will be more performant for data streaming that `iter_files` thanks to the buffering so maybe in the future we can `iter_archive` for some of these datasets",
"Yes, @lhoestq I agree with you: once `iter_archive` supports zip files, it will be more suitable than `iter_files` for these 2 datasets.\r\n\r\nAnyway, this PR also implements `isfile`/`isdir` in streaming mode, besides fixing `iter_files`. And I'm interested in having those in master.\r\n\r\nMaybe, could we merge this PR into master and take note to refactor the datasets to use `iter_archive` once zip is supported?\r\nOther option could be to split this PR into 2..."
] | 2021-12-23T16:26:55 | 2021-12-28T15:15:02 | 2021-12-28T15:15:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3477",
"html_url": "https://github.com/huggingface/datasets/pull/3477",
"diff_url": "https://github.com/huggingface/datasets/pull/3477.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3477.patch",
"merged_at": "2021-12-28T15:15:02"
} | Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova.
Additional changes:
* Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028))
* Add support for `os.path.isdir` and `os.path.isfile` in streaming (`os.path.isfile` is needed in `StreamingDownloadManager`'s `iter_files` to make `cats_vs_dogs` streamable)
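For context, a dataset script consumes `iter_files` roughly like this (an illustrative sketch, not the actual `beans`/`cats_vs_dogs` code):
```python
def generate_examples(files):
    # `files` would come from dl_manager.iter_files(data_dir),
    # which yields file paths both locally and in streaming mode
    for idx, path in enumerate(files):
        if path.endswith(".jpg"):
            yield idx, {"image": path}
```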
TODO:
- [ ] add tests for `xisdir` and `xisfile` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3477/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3476/comments | https://api.github.com/repos/huggingface/datasets/issues/3476/events | https://github.com/huggingface/datasets/pull/3476 | 1,087,622,872 | PR_kwDODunzps4wOZ8a | 3,476 | Extend support for streaming datasets that use ET.parse | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-23T11:18:46 | 2021-12-23T15:34:30 | 2021-12-23T15:34:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3476",
"html_url": "https://github.com/huggingface/datasets/pull/3476",
"diff_url": "https://github.com/huggingface/datasets/pull/3476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3476.patch",
"merged_at": "2021-12-23T15:34:30"
} | This PR extends streaming-mode support for datasets that use `ET.parse` by patching that function (a sketch of the idea follows the list below).
This PR adds support for streaming mode to datasets:
1. ami
1. assin
1. assin2
1. counter
1. enriched_web_nlg
1. europarl_bilingual
1. hyperpartisan_news_detection
1. polsum
1. qa4mre
1. quail
1. ted_talks_iwslt
1. udhr
1. web_nlg
1. winograd_wsc
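A minimal sketch of the patching idea (illustrative, not the actual implementation):
```python
import xml.etree.ElementTree as ET
import fsspec

def xet_parse(urlpath):
    # open the possibly-remote file and let ElementTree parse the stream
    with fsspec.open(urlpath) as f:
        return ET.parse(f)
```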
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3476/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3476/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3475/comments | https://api.github.com/repos/huggingface/datasets/issues/3475/events | https://github.com/huggingface/datasets/issues/3475 | 1,087,352,041 | I_kwDODunzps5Az6zp | 3,475 | The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish | {
"login": "puzzler10",
"id": 17426779,
"node_id": "MDQ6VXNlcjE3NDI2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puzzler10",
"html_url": "https://github.com/puzzler10",
"followers_url": "https://api.github.com/users/puzzler10/followers",
"following_url": "https://api.github.com/users/puzzler10/following{/other_user}",
"gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions",
"organizations_url": "https://api.github.com/users/puzzler10/orgs",
"repos_url": "https://api.github.com/users/puzzler10/repos",
"events_url": "https://api.github.com/users/puzzler10/events{/privacy}",
"received_events_url": "https://api.github.com/users/puzzler10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)",
"Maybe best to just put a quick sentence in the dataset description that highlights this? "
] | 2021-12-23T03:56:43 | 2021-12-24T00:23:03 | null | NONE | null | null | null | ## Describe the bug
See title. I don't think this is intentional, and they should probably be removed. If they stay, the dataset description should at least be updated to make this clear to the user.
## Steps to reproduce the bug
Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find like that.
## Expected results
English movie reviews only.
## Actual results
Example of a Spanish movie review (4173):
> "ร uma pena que , mais tarde , o prรณprio filme abandone o tom de parรณdia e passe a utilizar os mesmos clichรชs que havia satirizado "
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3475/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3474/comments | https://api.github.com/repos/huggingface/datasets/issues/3474/events | https://github.com/huggingface/datasets/pull/3474 | 1,086,945,384 | PR_kwDODunzps4wMMt0 | 3,474 | Decode images when iterating | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-22T15:34:49 | 2021-12-28T16:08:10 | 2021-12-28T16:08:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3474",
"html_url": "https://github.com/huggingface/datasets/pull/3474",
"diff_url": "https://github.com/huggingface/datasets/pull/3474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3474.patch",
"merged_at": null
} | If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned.
This PR enables image decoding in `Dataset.__iter__`
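A quick sanity check of the user-visible effect (a sketch mirroring the repro in the linked issue; the exact `PIL` subclass may vary):
```python
from datasets import load_dataset
import PIL.Image

mnist = load_dataset("mnist", split="train")
# With decoding enabled in __iter__, iterating yields PIL images,
# matching what __getitem__ already returns.
first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.Image.Image)
```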
Close https://github.com/huggingface/datasets/issues/3473 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3474/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3473/comments | https://api.github.com/repos/huggingface/datasets/issues/3473/events | https://github.com/huggingface/datasets/issues/3473 | 1,086,937,610 | I_kwDODunzps5AyVoK | 3,473 | Iterating over a vision dataset doesn't decode the images | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | null | [] | null | [
"As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.",
"> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.",
"@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================",
"Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).",
"> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n",
"Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)",
"For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.",
"Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. Feel free to reopen it again if further changes of the specs should be addressed.",
"Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?"
] | 2021-12-22T15:26:32 | 2021-12-27T14:13:21 | 2021-12-23T15:21:57 | MEMBER | null | null | null | ## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL.PngImagePlugin  # import the submodule explicitly so PIL.PngImagePlugin is defined below
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes
first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails
```
## Expected results
The image should be decoded, as a PIL Image
## Actual results
We get a dictionary
```
{'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None}
```
## Environment info
- `datasets` version: 1.17.1.dev0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyArrow version: 6.0.0
The bug also exists in 1.17.0
## Investigation
I think the issue is that decoding is disabled in `__iter__`:
https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661
Do you remember why it was disabled in the first place @albertvillanova ?
Also cc @mariosasko @NielsRogge
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3473/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3472/comments | https://api.github.com/repos/huggingface/datasets/issues/3472/events | https://github.com/huggingface/datasets/pull/3472 | 1,086,908,508 | PR_kwDODunzps4wMEwA | 3,472 | Fix `str(Path(...))` conversion in streaming on Linux | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-22T15:06:03 | 2021-12-22T16:52:53 | 2021-12-22T16:52:52 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3472",
"html_url": "https://github.com/huggingface/datasets/pull/3472",
"diff_url": "https://github.com/huggingface/datasets/pull/3472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3472.patch",
"merged_at": "2021-12-22T16:52:52"
} | Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3472/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3472/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3471/comments | https://api.github.com/repos/huggingface/datasets/issues/3471/events | https://github.com/huggingface/datasets/pull/3471 | 1,086,588,074 | PR_kwDODunzps4wLAk6 | 3,471 | Fix Tashkeela dataset to yield stripped text | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-22T08:41:30 | 2021-12-22T10:12:08 | 2021-12-22T10:12:07 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3471",
"html_url": "https://github.com/huggingface/datasets/pull/3471",
"diff_url": "https://github.com/huggingface/datasets/pull/3471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3471.patch",
"merged_at": "2021-12-22T10:12:07"
} | This PR:
- Yields stripped text
- Fixes path for Windows
- Adds license
- Adds more info in dataset card
Close bigscience-workshop/data_tooling#279 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3471/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3470/comments | https://api.github.com/repos/huggingface/datasets/issues/3470/events | https://github.com/huggingface/datasets/pull/3470 | 1,086,049,888 | PR_kwDODunzps4wJO8t | 3,470 | Fix rendering of docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-21T17:17:01 | 2021-12-22T09:23:47 | 2021-12-22T09:23:47 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3470",
"html_url": "https://github.com/huggingface/datasets/pull/3470",
"diff_url": "https://github.com/huggingface/datasets/pull/3470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3470.patch",
"merged_at": "2021-12-22T09:23:47"
} | Minor fix in docs.
Currently, the `ClassLabel` docstring is not rendered correctly.
"url": "https://api.github.com/repos/huggingface/datasets/issues/3470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3469/comments | https://api.github.com/repos/huggingface/datasets/issues/3469/events | https://github.com/huggingface/datasets/pull/3469 | 1,085,882,664 | PR_kwDODunzps4wIrOV | 3,469 | Fix METEOR missing NLTK's omw-1.4 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also modified the doctest call to raise the exception that doctest may catch, instead of `doctest.UnexpectedException`.\r\nThis will make debugging easier if it happens again"
] | 2021-12-21T14:19:11 | 2021-12-21T14:52:28 | 2021-12-21T14:49:28 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3469",
"html_url": "https://github.com/huggingface/datasets/pull/3469",
"diff_url": "https://github.com/huggingface/datasets/pull/3469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3469.patch",
"merged_at": "2021-12-21T14:49:28"
} | NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work.
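For anyone hitting this locally, a minimal sketch of the extra download step (package names as referenced above, using NLTK's standard downloader):
```python
import nltk

# METEOR uses WordNet under the hood; since NLTK 3.6.6 the
# Open Multilingual Wordnet data must be downloaded as well.
nltk.download("wordnet")
nltk.download("omw-1.4")
```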
This should fix the CI on master | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3469/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3468/comments | https://api.github.com/repos/huggingface/datasets/issues/3468/events | https://github.com/huggingface/datasets/pull/3468 | 1,085,871,301 | PR_kwDODunzps4wIozO | 3,468 | Add COCO dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR. ",
"Thanks a lot for this great work and fixing TFDS based script @mariosasko ๐ค will generate the dummy dataset and write the model card tomorrow!",
"@mariosasko I added the dataset card, I'm on the dummy data rn. ",
"@merveenoyan Let me know if you need any help with the dummy data.\r\n\r\nI plan to split the current script/dataset into 4 smaller scripts/datasets to make sure they are properly indexed by Papers With Code later on. In this format:\r\n* the `*_image_captioning` configs will form the [COCO Captions](https://paperswithcode.com/sota/image-captioning-on-coco-captions) dataset (also present in TFDS, but only the 2017 version)\r\n* the `stuff_segmentation` config will form the [COCO Stuff](https://paperswithcode.com/dataset/coco-stuff) dataset\r\n* the `desnepose` config will form the [DensePose-COCO](https://paperswithcode.com/dataset/densepose) dataset\r\n* the rest will be [COCO](https://paperswithcode.com/dataset/coco) (+ will add the `minival` and the `valminusminival` splits to COCO 2014)\r\n\r\nAlso, if I find the time, I'll add preprocessing examples that rely on `pycocotools` to the README files.",
"@mariosasko I feel like we can just push main COCO and add Captions + Stuff later, WDYT?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your contribution, @mariosasko and @merveenoyan. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | 2021-12-21T14:07:50 | 2022-10-03T09:38:07 | 2022-10-03T09:36:08 | CONTRIBUTOR | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3468",
"html_url": "https://github.com/huggingface/datasets/pull/3468",
"diff_url": "https://github.com/huggingface/datasets/pull/3468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3468.patch",
"merged_at": null
} | This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection.
Some notes:
* the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here
* I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`)
* this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427
TODOs:
- [x] dataset card
- [ ] dummy data
cc @merveenoyan
Closes #2526 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3468/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3468/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3467/comments | https://api.github.com/repos/huggingface/datasets/issues/3467/events | https://github.com/huggingface/datasets/pull/3467 | 1,085,870,665 | PR_kwDODunzps4wIoqd | 3,467 | Push dataset infos.json to Hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657"
] | 2021-12-21T14:07:13 | 2021-12-21T17:00:10 | 2021-12-21T17:00:09 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3467",
"html_url": "https://github.com/huggingface/datasets/pull/3467",
"diff_url": "https://github.com/huggingface/datasets/pull/3467.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3467.patch",
"merged_at": "2021-12-21T17:00:09"
} | When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394).
This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, which stores the feature types.
Other minor changes:
- renamed the `___` separator to `--`, since `___` is now disallowed in a name in the back-end (while `--` is now allowed).
I tested this feature with datasets like conll2003 that have feature types like `ClassLabel` that were previously lost.
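A rough round-trip sketch of what is preserved (the repo id below is hypothetical):
```python
from datasets import ClassLabel, load_dataset

ds = load_dataset("conll2003", split="train")
# push_to_hub now also uploads dataset_infos.json, which stores the feature types
ds.push_to_hub("my-username/conll2003-copy")  # hypothetical repo id

reloaded = load_dataset("my-username/conll2003-copy", split="train")
# Before this change, ClassLabel feature types were lost on reload
assert isinstance(reloaded.features["ner_tags"].feature, ClassLabel)
```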
Close https://github.com/huggingface/datasets/issues/3394
I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3467/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3467/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3466/comments | https://api.github.com/repos/huggingface/datasets/issues/3466/events | https://github.com/huggingface/datasets/pull/3466 | 1,085,722,837 | PR_kwDODunzps4wII3w | 3,466 | Add CRASS dataset | {
"login": "apergo-ai",
"id": 68908804,
"node_id": "MDQ6VXNlcjY4OTA4ODA0",
"avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apergo-ai",
"html_url": "https://github.com/apergo-ai",
"followers_url": "https://api.github.com/users/apergo-ai/followers",
"following_url": "https://api.github.com/users/apergo-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions",
"organizations_url": "https://api.github.com/users/apergo-ai/orgs",
"repos_url": "https://api.github.com/users/apergo-ai/repos",
"events_url": "https://api.github.com/users/apergo-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/apergo-ai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | null | [
"Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)",
"Thanks for your contribution, @apergo-ai. \r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. It's OK for you? Please, feel free to tell us if you need some help."
] | 2021-12-21T11:17:22 | 2022-10-03T09:37:06 | 2022-10-03T09:37:06 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3466",
"html_url": "https://github.com/huggingface/datasets/pull/3466",
"diff_url": "https://github.com/huggingface/datasets/pull/3466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3466.patch",
"merged_at": null
} | Added crass dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3466/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3465/comments | https://api.github.com/repos/huggingface/datasets/issues/3465/events | https://github.com/huggingface/datasets/issues/3465 | 1,085,400,432 | I_kwDODunzps5AseVw | 3,465 | Unable to load 'cnn_dailymail' dataset | {
"login": "talha1503",
"id": 42352729,
"node_id": "MDQ6VXNlcjQyMzUyNzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talha1503",
"html_url": "https://github.com/talha1503",
"followers_url": "https://api.github.com/users/talha1503/followers",
"following_url": "https://api.github.com/users/talha1503/following{/other_user}",
"gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talha1503/subscriptions",
"organizations_url": "https://api.github.com/users/talha1503/orgs",
"repos_url": "https://api.github.com/users/talha1503/repos",
"events_url": "https://api.github.com/users/talha1503/events{/privacy}",
"received_events_url": "https://api.github.com/users/talha1503/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?",
"This looks related to https://github.com/huggingface/datasets/issues/996",
"It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem"
] | 2021-12-21T03:32:21 | 2022-02-17T14:13:57 | 2022-02-17T14:13:57 | NONE | null | null | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications=True)
```
## Expected results
Expecting to load 'cnn_dailymail' dataset.
## Actual results
`NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'`
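Update: a third-party mirror hosted directly on the Hub (mentioned in the comments) avoids the Google Drive quota limits; a possible workaround:
```python
from datasets import load_dataset

# Workaround: load the Hub-hosted mirror instead of the Google Drive files
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0")
```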
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3465/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3464/comments | https://api.github.com/repos/huggingface/datasets/issues/3464/events | https://github.com/huggingface/datasets/issues/3464 | 1,085,399,097 | I_kwDODunzps5AseA5 | 3,464 | struct.error: 'i' format requires -2147483648 <= number <= 2147483647 | {
"login": "koukoulala",
"id": 30341159,
"node_id": "MDQ6VXNlcjMwMzQxMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koukoulala",
"html_url": "https://github.com/koukoulala",
"followers_url": "https://api.github.com/users/koukoulala/followers",
"following_url": "https://api.github.com/users/koukoulala/following{/other_user}",
"gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions",
"organizations_url": "https://api.github.com/users/koukoulala/orgs",
"repos_url": "https://api.github.com/users/koukoulala/repos",
"events_url": "https://api.github.com/users/koukoulala/events{/privacy}",
"received_events_url": "https://api.github.com/users/koukoulala/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Can you try setting `datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING` to a smaller value than `4 << 30` (4GiB), for example `500 << 20` (500MiB) ? It should reduce the maximum size of the arrow table being pickled during multiprocessing.\r\n\r\nIf it fixes the issue, we can consider lowering the default value for everyone.",
"@lhoestq I tried that just now but didn't seem to help."
] | 2021-12-21T03:29:01 | 2022-11-21T19:55:11 | null | NONE | null | null | null | ## Describe the bug
Using the latest wheel (datasets-1.16.1-py3-none-any.whl), I process my own multilingual dataset with the following code; the dataset has 306000 rows in total, and the max_length of each sentence is 256:
![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png)
then I get this error:
![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png)
I have seen the issue in #2134 and #2150, so I don't understand why the latest version still can't deal with big datasets.
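A sketch of the mitigation suggested in the comments (attribute name as given there; the default is `4 << 30`, i.e. 4 GiB):
```python
import datasets

# Lower the pickling threshold so arrow tables shipped to worker
# processes stay under the 2 GiB `struct` packing limit.
datasets.config.MAX_TABLE_NBYTES_FOR_PICKLING = 500 << 20  # 500 MiB
```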
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: linux docker
- Python version: 3.6
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3464/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3463/comments | https://api.github.com/repos/huggingface/datasets/issues/3463/events | https://github.com/huggingface/datasets/pull/3463 | 1,085,078,795 | PR_kwDODunzps4wGB4P | 3,463 | Update swahili_news dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-20T18:20:20 | 2021-12-21T06:24:03 | 2021-12-21T06:24:02 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3463",
"html_url": "https://github.com/huggingface/datasets/pull/3463",
"diff_url": "https://github.com/huggingface/datasets/pull/3463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3463.patch",
"merged_at": "2021-12-21T06:24:01"
} | Update the dataset with the latest version of the data files.
Fix #3462.
Close bigscience-workshop/data_tooling#107 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3462/comments | https://api.github.com/repos/huggingface/datasets/issues/3462/events | https://github.com/huggingface/datasets/issues/3462 | 1,085,049,661 | I_kwDODunzps5ArIs9 | 3,462 | Update swahili_news dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2021-12-20T17:44:01 | 2021-12-21T06:24:02 | 2021-12-21T06:24:01 | MEMBER | null | null | null | Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203.
## Adding a Dataset
- **Name:** swahili_news
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Related to:
- bigscience-workshop/data_tooling#107
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3462/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3461/comments | https://api.github.com/repos/huggingface/datasets/issues/3461/events | https://github.com/huggingface/datasets/pull/3461 | 1,085,007,346 | PR_kwDODunzps4wFzDP | 3,461 | Fix links in metrics description | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-20T16:56:19 | 2021-12-20T17:14:52 | 2021-12-20T17:14:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3461",
"html_url": "https://github.com/huggingface/datasets/pull/3461",
"diff_url": "https://github.com/huggingface/datasets/pull/3461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3461.patch",
"merged_at": "2021-12-20T17:14:51"
} | Remove Markdown syntax for links in metrics description, as it is not properly rendered.
Related to #3437. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3461/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3460/comments | https://api.github.com/repos/huggingface/datasets/issues/3460/events | https://github.com/huggingface/datasets/pull/3460 | 1,085,002,469 | PR_kwDODunzps4wFyCf | 3,460 | Don't encode lists as strings when using `Value("string")` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2021-12-20T16:50:49 | 2022-07-06T15:19:49 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3460",
"html_url": "https://github.com/huggingface/datasets/pull/3460",
"diff_url": "https://github.com/huggingface/datasets/pull/3460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3460.patch",
"merged_at": null
} | Following https://github.com/huggingface/datasets/pull/3456#event-5792250497, it looks like `datasets` can silently convert lists to strings using `str()` instead of raising an error.
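A hypothetical illustration based on the linked WER discussion (the exact repro path may differ): before this fix, token lists passed where `Value("string")` is expected were silently `str()`-ified, so the metric scored `"['hello', 'world']"`-style strings instead of raising:
```python
from datasets import load_metric

wer = load_metric("wer")
# Wrong input format: token lists instead of plain strings.
# Previously these were silently converted with str(...), producing
# misleadingly low scores rather than an error.
score = wer.compute(
    predictions=[["hello", "world"]],
    references=[["hello", "word"]],
)
print(score)
```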
This PR fixes this and should fix the issue with WER showing low values if the input format is not right. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3460/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3459/comments | https://api.github.com/repos/huggingface/datasets/issues/3459/events | https://github.com/huggingface/datasets/issues/3459 | 1,084,969,672 | I_kwDODunzps5Aq1LI | 3,459 | dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected. | {
"login": "mmajurski",
"id": 9354454,
"node_id": "MDQ6VXNlcjkzNTQ0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmajurski",
"html_url": "https://github.com/mmajurski",
"followers_url": "https://api.github.com/users/mmajurski/followers",
"following_url": "https://api.github.com/users/mmajurski/following{/other_user}",
"gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions",
"organizations_url": "https://api.github.com/users/mmajurski/orgs",
"repos_url": "https://api.github.com/users/mmajurski/repos",
"events_url": "https://api.github.com/users/mmajurski/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmajurski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?",
"Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed."
] | 2021-12-20T16:16:49 | 2021-12-20T16:34:57 | 2021-12-20T16:34:57 | NONE | null | null | null | ## Describe the bug
When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset.
The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is.
However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner.
https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter
Effectively, it looks like the original set of _indices was discarded and overwritten by the set created during the filter operation.
I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflects the map transformation applied to the starting _indices.
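A possible workaround sketch until this is fixed (assuming `Dataset.flatten_indices()` is available in this version): materialize the current selection before filtering, so `filter` starts from a clean table instead of stale `_indices`:
```python
# Rewrite the table in the currently selected/shuffled order and clear
# dataset._indices, then filter as usual.
dataset = dataset.flatten_indices()
dataset = dataset.filter(lambda x: x["label"] == 0, keep_in_memory=True)
```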
## Steps to reproduce the bug
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print("initial 10 elements")
print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
print("filtered 10 elements looking for label 0")
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1]
```
## Actual results
```
$ python indices_bug.py
initial 10 elements
[1, 1, 0, 1, 0, 0, 0, 1, 0, 0]
filtered 10 elements looking for label 0
[1, 1, 1, 1, 1, 1]
```
This code block first shuffles the dataset (to get a mix of label 0 and label 1).
Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset.
Finally, a filter is applied to pull out just the elements with `label == 0`.
The bug is that you cannot combine filter with any dataset operation which sets dataset._indices.
In this case I have two: shuffle and select.
If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up.
The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results.
```python
dataset = load_dataset('imdb', split='train', keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)
print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
## Expected results
In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set.
If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected.
## Environment info
Here are the commands required to rebuild the conda environment from scratch.
```
# create a virtual environment
conda create -n dataset_indices python=3.8 -y
# activate the virtual environment
conda activate dataset_indices
# install huggingface datasets
conda install datasets
```
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 3.0.0
### Full Conda Environment
```
$ conda env export
name: dasaset_indices
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- abseil-cpp=20210324.2=h2531618_0
- aiohttp=3.8.1=py38h7f8727e_0
- aiosignal=1.2.0=pyhd3eb1b0_0
- arrow-cpp=3.0.0=py38h6b21186_4
- attrs=21.2.0=pyhd3eb1b0_0
- aws-c-common=0.4.57=he6710b0_1
- aws-c-event-stream=0.1.6=h2531618_5
- aws-checksums=0.1.9=he6710b0_0
- aws-sdk-cpp=1.8.185=hce553d0_0
- bcj-cffi=0.5.1=py38h295c915_0
- blas=1.0=mkl
- boost-cpp=1.73.0=h27cfd23_11
- bottleneck=1.3.2=py38heb32a55_1
- brotli=1.0.9=he6710b0_2
- brotli-python=1.0.9=py38heb0550a_2
- brotlicffi=1.0.9.2=py38h295c915_0
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- c-ares=1.17.1=h27cfd23_0
- ca-certificates=2021.10.26=h06a4308_2
- certifi=2021.10.8=py38h06a4308_0
- cffi=1.14.6=py38h400218f_0
- conllu=4.4.1=pyhd3eb1b0_0
- cryptography=36.0.0=py38h9ce1e76_0
- dataclasses=0.8=pyh6d0b6a4_7
- dill=0.3.4=pyhd3eb1b0_0
- double-conversion=3.1.5=he6710b0_1
- et_xmlfile=1.1.0=py38h06a4308_0
- filelock=3.4.0=pyhd3eb1b0_0
- frozenlist=1.2.0=py38h7f8727e_0
- gflags=2.2.2=he6710b0_0
- glog=0.5.0=h2531618_0
- gmp=6.2.1=h2531618_2
- grpc-cpp=1.39.0=hae934f6_5
- huggingface_hub=0.0.17=pyhd3eb1b0_0
- icu=58.2=he6710b0_3
- idna=3.3=pyhd3eb1b0_0
- importlib-metadata=4.8.2=py38h06a4308_0
- importlib_metadata=4.8.2=hd3eb1b0_0
- intel-openmp=2021.4.0=h06a4308_3561
- krb5=1.19.2=hac12032_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libboost=1.73.0=h3ff78a5_11
- libcurl=7.80.0=h0b77cf5_0
- libedit=3.1.20210910=h7f8727e_0
- libev=4.33=h7f8727e_1
- libevent=2.1.8=h1ba5d50_1
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libnghttp2=1.46.0=hce63b2e_0
- libprotobuf=3.17.2=h4ff587b_1
- libssh2=1.9.0=h1ba5d50_1
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libthrift=0.14.2=hcc01f38_0
- libxml2=2.9.12=h03d6c58_0
- libxslt=1.1.34=hc22bd24_0
- lxml=4.6.3=py38h9120a33_0
- lz4-c=1.9.3=h295c915_1
- mkl=2021.4.0=h06a4308_640
- mkl-service=2.4.0=py38h7f8727e_0
- mkl_fft=1.3.1=py38hd3c417c_0
- mkl_random=1.2.2=py38h51133e4_0
- multiprocess=0.70.12.2=py38h7f8727e_0
- multivolumefile=0.2.3=pyhd3eb1b0_0
- ncurses=6.3=h7f8727e_2
- numexpr=2.7.3=py38h22e1b3c_1
- numpy=1.21.2=py38h20f2e39_0
- numpy-base=1.21.2=py38h79a1101_0
- openpyxl=3.0.9=pyhd3eb1b0_0
- openssl=1.1.1l=h7f8727e_0
- orc=1.6.9=ha97a36c_3
- packaging=21.3=pyhd3eb1b0_0
- pip=21.2.4=py38h06a4308_0
- py7zr=0.16.1=pyhd3eb1b0_1
- pycparser=2.21=pyhd3eb1b0_0
- pycryptodomex=3.10.1=py38h27cfd23_1
- pyopenssl=21.0.0=pyhd3eb1b0_1
- pyparsing=3.0.4=pyhd3eb1b0_0
- pyppmd=0.16.1=py38h295c915_0
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.12=h12debd9_0
- python-dateutil=2.8.2=pyhd3eb1b0_0
- python-xxhash=2.0.2=py38h7f8727e_0
- pyzstd=0.14.4=py38h7f8727e_3
- re2=2020.11.01=h2531618_1
- readline=8.1=h27cfd23_0
- requests=2.26.0=pyhd3eb1b0_0
- setuptools=58.0.4=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- snappy=1.1.8=he6710b0_0
- sqlite=3.36.0=hc218d9a_0
- texttable=1.6.4=pyhd3eb1b0_0
- tk=8.6.11=h1ccaba5_0
- typing_extensions=3.10.0.2=pyh06a4308_0
- uriparser=0.9.3=he6710b0_1
- utf8proc=2.6.1=h27cfd23_0
- wheel=0.37.0=pyhd3eb1b0_1
- xxhash=0.8.0=h7f8727e_3
- xz=5.2.5=h7b6447c_0
- zipp=3.6.0=pyhd3eb1b0_0
- zlib=1.2.11=h7f8727e_4
- zstd=1.4.9=haebb681_0
- pip:
- async-timeout==4.0.2
- charset-normalizer==2.0.9
- datasets==1.16.1
- fsspec==2021.11.1
- huggingface-hub==0.2.1
- multidict==5.2.0
- pandas==1.3.5
- pyarrow==6.0.1
- pytz==2021.3
- pyyaml==6.0
- tqdm==4.62.3
- typing-extensions==4.0.1
- urllib3==1.26.7
- yarl==1.7.2
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3459/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3458/comments | https://api.github.com/repos/huggingface/datasets/issues/3458/events | https://github.com/huggingface/datasets/pull/3458 | 1,084,926,025 | PR_kwDODunzps4wFiRb | 3,458 | Fix duplicated tag in wikicorpus dataset card | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI is failing just because of empty sections - merging"
] | 2021-12-20T15:34:16 | 2021-12-20T16:03:25 | 2021-12-20T16:03:24 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3458",
"html_url": "https://github.com/huggingface/datasets/pull/3458",
"diff_url": "https://github.com/huggingface/datasets/pull/3458.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3458.patch",
"merged_at": "2021-12-20T16:03:24"
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3458/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3457/comments | https://api.github.com/repos/huggingface/datasets/issues/3457/events | https://github.com/huggingface/datasets/issues/3457 | 1,084,862,121 | I_kwDODunzps5Aqa6p | 3,457 | Add CMU Graphics Lab Motion Capture dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"This dataset has files in ASF/AMC format. [ The skeleton file is the ASF file (Acclaim Skeleton File). The motion file is the AMC file (Acclaim Motion Capture data). ] \r\n\r\nSome questions : \r\n1. How do we go about representing these features using datasets.Features and generate examples ?\r\n2. The dataset download link for ASF/AMC files does not have metadata information, for eg : category and subcategory information. We will need to crawl the website for this information. The authors mention \"Please don't crawl this database for all motions.\" Can we mail the authors for this information ?\r\nThe dataset structure is as follows : \r\n```\r\nsubjects\r\n\t- 01\r\n\t\t- 01_01.amc\r\n\t\t- 01_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 01.asf\r\n\t- 02\r\n\t\t- 02_01.amc\r\n\t\t- 02_02.amc\r\n\t\t.\r\n\t\t.\r\n\t\t.\r\n\t\t- 02.asf\r\n```\r\nThere is no metadata regarding the category, sub-category and motion description.\r\n\r\nNeed your inputs. @mariosasko / @lhoestq \r\nThank you.\r\n",
"Hi @dnaveenr! Thanks for working on this!\r\n\r\n1. We can use the `Sequence(Value(\"string\"))` feature type for the subject's AMC files and `Value(\"string\")` for the subject's ASF file (`Value(\"string\")` represents the file paths) + the types for categories/subcategories and descriptions.\r\n2. We can use this URL to download the motion descriptions: http://mocap.cs.cmu.edu/search.php?subjectnumber=<subject_number>&motion=%%%&maincat=%&subcat=%&subtext=yes where `subject_number` is the number between 1 and 144. And to get categories/subcategories, feel free to contact the authors (they state in the FAQ they are happy to help) and ask them if they can provide the mapping from categories/subcategories to the AMC files to avoid crawling. You can also mention that your goal is to make their dataset more accessible by adding its loading script to the Hub.\r\n\r\nThe AMC files are also available in the tvd, c3d, mpg and avi formats (the links are in the [FAQ](http://mocap.cs.cmu.edu/faqs.php) section), so it would be nice to have one config for each of these additional formats. \r\n\r\nAnd additionally, we can add a `Data Preprocessing` section to the card where we explain how to load/process the files. I can help with that.",
"Hi @mariosasko ,\r\n\r\n1. Thanks for this, so we can add the file paths.\r\n2. Yes, I had already mailed the authors a couple of days back actually, asking for the metadata details[ i.e category, sub-category and motion description] . They are yet to respond though, I will wait for a couple of days and try to follow up with them again. :) Else we can use the workaround solution.\r\n\r\nYes. Supporting all the formats would be helpful. \r\n\r\n> And additionally, we can add a Data Preprocessing section to the card where we explain how to load/process the files. I can help with that.\r\n\r\nOkay. Got it."
] | 2021-12-20T14:34:39 | 2022-03-16T16:53:09 | null | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** CMU Graphics Lab Motion Capture database
- **Description:** The database contains free motions which you can download and use.
- **Data:** http://mocap.cs.cmu.edu/
- **Motivation:** Nice motion capture dataset
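A possible feature schema along the lines suggested in the comments (a sketch only; the field names are assumptions, not a finalized design):
```python
from datasets import Features, Sequence, Value

features = Features(
    {
        "subject": Value("string"),
        "asf_file": Value("string"),  # path to the skeleton (ASF) file
        "amc_files": Sequence(Value("string")),  # paths to the motion (AMC) files
        "category": Value("string"),  # pending metadata from the authors
        "subcategory": Value("string"),
        "description": Value("string"),
    }
)
```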
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3457/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3456/comments | https://api.github.com/repos/huggingface/datasets/issues/3456/events | https://github.com/huggingface/datasets/pull/3456 | 1,084,687,973 | PR_kwDODunzps4wEwXz | 3,456 | [WER] Better error message for wer | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! I don't think this would solve this issue.\r\nCurrently it looks like there's a bug that converts the list `[\"hello it's nice\"]` to a string `'[\"hello it's nice\"]'` since this is what the metric expects as input. The conversion is done before the data are passed to `_compute()`.\r\n\r\nThis is `Value(\"string\").encode_example` that is called to do the conversion. Since `str()` encoding is too permissive we should consider raising an error if the example is not a string (even though it can be converted to string). ",
"> called\r\n\r\nAh yeah you're right",
"I just opened https://github.com/huggingface/datasets/pull/3460 to fix that. It now raises an error instead of computing the wrong WER",
"Thank you - that should be good enough!"
] | 2021-12-20T11:38:40 | 2021-12-20T16:53:37 | 2021-12-20T16:53:36 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3456",
"html_url": "https://github.com/huggingface/datasets/pull/3456",
"diff_url": "https://github.com/huggingface/datasets/pull/3456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3456.patch",
"merged_at": null
} | Currently we have the following problem when using the WER metric: when the input format is wrong, instead of throwing an error, an incorrect word-error rate is computed. E.g. when doing the following:
```python
from datasets import load_metric
wer = load_metric("wer")
target_str = ["hello this is nice", "hello the weather is bloomy"]
pred_str = [["hello it's nice"], ["hello it's the weather"]]
print("Wrong:", wer.compute(predictions=pred_str, references=target_str))
print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str))
```
We get:
```
Wrong: 1.0
Correct 0.5555555555555556
```
meaning that we get a word-error rate even for incorrectly passed input formats. We should raise an error here instead, so that people don't spend hours debugging a model when their incorrect use of the evaluation metric is the actual cause of the low WER.
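A minimal sketch of the kind of input validation that could catch this early (illustrative only; per the comments, the actual fix was done in #3460, which raises an error at the encoding step instead):
```python
# Illustrative sketch, not the actual fix from #3460.
def check_wer_inputs(predictions, references):
    for name, seq in [("predictions", predictions), ("references", references)]:
        for item in seq:
            if not isinstance(item, str):
                raise TypeError(
                    f"Each element of '{name}' must be a string, "
                    f"got {type(item).__name__}: {item!r}"
                )
```
 | {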
"url": "https://api.github.com/repos/huggingface/datasets/issues/3456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3456/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3455/comments | https://api.github.com/repos/huggingface/datasets/issues/3455/events | https://github.com/huggingface/datasets/issues/3455 | 1,084,599,650 | I_kwDODunzps5Apa1i | 3,455 | Easier information editing | {
"login": "borgr",
"id": 6416600,
"node_id": "MDQ6VXNlcjY0MTY2MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borgr",
"html_url": "https://github.com/borgr",
"followers_url": "https://api.github.com/users/borgr/followers",
"following_url": "https://api.github.com/users/borgr/following{/other_user}",
"gists_url": "https://api.github.com/users/borgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borgr/subscriptions",
"organizations_url": "https://api.github.com/users/borgr/orgs",
"repos_url": "https://api.github.com/users/borgr/repos",
"events_url": "https://api.github.com/users/borgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/borgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?"
] | 2021-12-20T10:10:43 | 2021-12-20T14:48:59 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
It requires a lot of effort to improve a datasheet.
**Describe the solution you'd like**
A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branching, Makefile, etc.).
**Describe alternatives you've considered**
The current UX imposes the 8 contribution steps even when one just wishes to change a line, fix a typo, etc.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3455/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3454/comments | https://api.github.com/repos/huggingface/datasets/issues/3454/events | https://github.com/huggingface/datasets/pull/3454 | 1,084,519,107 | PR_kwDODunzps4wENam | 3,454 | Fix iter_archive generator | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2021-12-20T08:50:15 | 2021-12-20T10:05:00 | 2021-12-20T10:04:59 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3454",
"html_url": "https://github.com/huggingface/datasets/pull/3454",
"diff_url": "https://github.com/huggingface/datasets/pull/3454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3454.patch",
"merged_at": "2021-12-20T10:04:59"
} | This PR:
- Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs
- Fixes bugs in `iter_archive` introduced in:
- #3443
Fix #3453.
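A hypothetical sketch of the shape of such a test (all names are illustrative, not the actual test code):
```python
from datasets import DownloadManager

def test_iter_archive_path(tar_path):
    dl_manager = DownloadManager()
    num_files = 0
    for path, file in dl_manager.iter_archive(tar_path):
        assert isinstance(path, str)
        file.read()  # must not raise "ValueError: read of closed file"
        num_files += 1
    assert num_files > 0
```
 | {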
"url": "https://api.github.com/repos/huggingface/datasets/issues/3454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3454/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3453/comments | https://api.github.com/repos/huggingface/datasets/issues/3453/events | https://github.com/huggingface/datasets/issues/3453 | 1,084,515,911 | I_kwDODunzps5ApGZH | 3,453 | ValueError while iter_archive | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2021-12-20T08:46:18 | 2021-12-20T10:04:59 | 2021-12-20T10:04:59 | MEMBER | null | null | null | ## Describe the bug
After the merge of:
- #3443
the method `iter_archive` throws a ValueError:
```
ValueError: read of closed file
```
## Steps to reproduce the bug
```python
for path, file in dl_manager.iter_archive(archive_path):
pass
```
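For illustration, the same failure mode can be reproduced with `tarfile` directly — once the archive object is closed, the member file objects it handed out can no longer be read (this assumes `archive_path` points at any tar archive, as in the snippet above):
```python
import tarfile

with tarfile.open(archive_path) as tar:
    members = [(m.name, tar.extractfile(m)) for m in tar if m.isfile()]
# The `with` block closed `tar`, so reading a member now raises a ValueError
# about the closed underlying file:
members[0][1].read()
```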
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3453/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3452/comments | https://api.github.com/repos/huggingface/datasets/issues/3452/events | https://github.com/huggingface/datasets/issues/3452 | 1,083,803,178 | I_kwDODunzps5AmYYq | 3,452 | why the stratify option is omitted from test_train_split function? | {
"login": "j-sieger",
"id": 9985334,
"node_id": "MDQ6VXNlcjk5ODUzMzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9985334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/j-sieger",
"html_url": "https://github.com/j-sieger",
"followers_url": "https://api.github.com/users/j-sieger/followers",
"following_url": "https://api.github.com/users/j-sieger/following{/other_user}",
"gists_url": "https://api.github.com/users/j-sieger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/j-sieger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/j-sieger/subscriptions",
"organizations_url": "https://api.github.com/users/j-sieger/orgs",
"repos_url": "https://api.github.com/users/j-sieger/repos",
"events_url": "https://api.github.com/users/j-sieger/events{/privacy}",
"received_events_url": "https://api.github.com/users/j-sieger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | closed | false | null | [] | null | [
"Hi ! It's simply not added yet :)\r\n\r\nIf someone wants to contribute to add the `stratify` parameter I'd be happy to give some pointers.\r\n\r\nIn the meantime, I guess you can use `sklearn` or other tools to do a stratified train/test split over the **indices** of your dataset and then do\r\n```\r\ntrain_dataset = dataset.select(train_indices)\r\ntest_dataset = dataset.select(test_indices)\r\n```",
"Hi @lhoestq I would like to add `stratify` parameter, can you give me some pointers for adding the same ?",
"Hi ! Sure :)\r\n\r\nThe `train_test_split` method is defined here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3253-L3253\r\n\r\nand inside `train_test_split ` we need to create the right `train_indices` and `test_indices` that are passed here to `.select()`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3450-L3464\r\n\r\nFor example if your dataset is like\r\n| | label |\r\n|---:|--------:|\r\n| 0 | 1 |\r\n| 1 | 1 |\r\n| 2 | 0 |\r\n| 3 | 0 |\r\n\r\nand the user passes `stratify=dataset[\"label\"]`, then you should get indices that look like this\r\n```\r\ntrain_indices = [0, 2]\r\ntest_indices = [1, 3]\r\n```\r\n\r\nthese indices will be passed to `.select` to return the stratified train and test splits :)\r\n\r\nFeel free to รฎng me if you have any question !",
"@lhoestq \r\nI just added the implementation for `stratify` option here #4322 "
] | 2021-12-18T10:37:47 | 2022-05-25T20:43:51 | 2022-05-25T20:43:51 | NONE | null | null | null | Why is the stratify option omitted from the `train_test_split` function?
Is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider when splitting a dataset.
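A minimal sketch of the scikit-learn workaround described in the comments (illustrative; it assumes the dataset has a `label` column):
```python
from sklearn.model_selection import train_test_split as sk_train_test_split

# Stratify over the row indices, then materialize the splits with .select()
indices = list(range(len(dataset)))
train_indices, test_indices = sk_train_test_split(
    indices, test_size=0.2, stratify=dataset["label"], random_state=42
)
train_dataset = dataset.select(train_indices)
test_dataset = dataset.select(test_indices)
```
 | {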
"url": "https://api.github.com/repos/huggingface/datasets/issues/3452/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3452/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3451/comments | https://api.github.com/repos/huggingface/datasets/issues/3451/events | https://github.com/huggingface/datasets/pull/3451 | 1,083,459,137 | PR_kwDODunzps4wA5LP | 3,451 | [Staging] Update dataset repos automatically on the Hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"do keep us updated on how it's going in staging! cc @SBrandeis ",
"Sure ! For now it works smoothly. We'll also do a new release today.\r\n\r\nI can send you some repos to explore on staging, in case you want to see how they look like after being updated.\r\nFor example [swahili_news](https://moon-staging.huggingface.co/datasets/swahili_news/tree/main)"
] | 2021-12-17T17:12:11 | 2021-12-21T10:25:46 | 2021-12-20T14:09:51 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3451",
"html_url": "https://github.com/huggingface/datasets/pull/3451",
"diff_url": "https://github.com/huggingface/datasets/pull/3451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3451.patch",
"merged_at": "2021-12-20T14:09:51"
} | Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going to prod.
Related to https://github.com/huggingface/datasets/issues/3341
The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes to the corresponding repositories on the Hub.
If there's a new dataset, then a new repository is created.
If the commit is a new release of `datasets`, it also pushes the tag to all the repositories. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3451/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3451/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3450/comments | https://api.github.com/repos/huggingface/datasets/issues/3450/events | https://github.com/huggingface/datasets/issues/3450 | 1,083,450,158 | I_kwDODunzps5AlCMu | 3,450 | Unexpected behavior doing Split + Filter | {
"login": "jbrachat",
"id": 26432605,
"node_id": "MDQ6VXNlcjI2NDMyNjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/26432605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbrachat",
"html_url": "https://github.com/jbrachat",
"followers_url": "https://api.github.com/users/jbrachat/followers",
"following_url": "https://api.github.com/users/jbrachat/following{/other_user}",
"gists_url": "https://api.github.com/users/jbrachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbrachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbrachat/subscriptions",
"organizations_url": "https://api.github.com/users/jbrachat/orgs",
"repos_url": "https://api.github.com/users/jbrachat/repos",
"events_url": "https://api.github.com/users/jbrachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbrachat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?\r\nsee https://github.com/huggingface/datasets/issues/3190\r\n\r\nMaybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`)"
] | 2021-12-17T17:00:39 | 2021-12-20T14:51:37 | null | NONE | null | null | null | ## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter')
## Steps to reproduce the bug
```
from datasets import Dataset
import pandas as pd
dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']}
df = pd.DataFrame.from_dict(dic)
dataset = Dataset.from_pandas(df)
split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42)
train_dataset = split_dataset["train"]
eval_dataset = split_dataset["test"]
eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0)
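# Bug in datasets 1.12 (see issue #3190): filter ignores the split's _indices, so rows leak from the full table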
print( eval_dataset['x'])
print(eval_dataset_2['x'])
```
One observes that elements in `eval_dataset_2` actually come from the training dataset...
## Expected results
The expected results would be that the filtered eval dataset would only contain elements from the original eval dataset.
## Actual results
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows 10
- Python version: 3.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3450/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3449/comments | https://api.github.com/repos/huggingface/datasets/issues/3449/events | https://github.com/huggingface/datasets/issues/3449 | 1,083,373,018 | I_kwDODunzps5AkvXa | 3,449 | Add `__add__()`, `__iadd__()` and similar to `Dataset` class | {
"login": "sgraaf",
"id": 8904453,
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgraaf",
"html_url": "https://github.com/sgraaf",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis). \r\n(Assuming elimination of axis means concatenating over axis 1.)"
] | 2021-12-17T15:29:11 | 2021-12-24T11:06:10 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
No.
**Describe the solution you'd like**
I would like to be able to concatenate datasets as follows:
```python
>>> dataset["train"] += dataset["validation"]
```
... instead of using `concatenate_datasets()`:
```python
>>> raw_datasets["train"] = concatenate_datasets([raw_datasets["train"], raw_datasets["validation"]])
>>> del raw_datasets["validation"]
```
**Describe alternatives you've considered**
Well, I have considered `concatenate_datasets()` :)
**Additional context**
N.a.
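For illustration, the requested operator can already be emulated with a monkey-patch (purely a sketch, not part of the library; it assumes `dataset` is a `DatasetDict` as above):
```python
from datasets import Dataset, concatenate_datasets

# Hypothetical: give Dataset an __add__ that delegates to concatenate_datasets()
Dataset.__add__ = lambda self, other: concatenate_datasets([self, other])

# += then falls back to __add__:
dataset["train"] += dataset["validation"]
del dataset["validation"]
```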
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3449/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3448/comments | https://api.github.com/repos/huggingface/datasets/issues/3448/events | https://github.com/huggingface/datasets/issues/3448 | 1,083,231,080 | I_kwDODunzps5AkMto | 3,448 | JSONDecodeError with HuggingFace dataset viewer | {
"login": "kathrynchapman",
"id": 57716109,
"node_id": "MDQ6VXNlcjU3NzE2MTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/57716109?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kathrynchapman",
"html_url": "https://github.com/kathrynchapman",
"followers_url": "https://api.github.com/users/kathrynchapman/followers",
"following_url": "https://api.github.com/users/kathrynchapman/following{/other_user}",
"gists_url": "https://api.github.com/users/kathrynchapman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kathrynchapman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kathrynchapman/subscriptions",
"organizations_url": "https://api.github.com/users/kathrynchapman/orgs",
"repos_url": "https://api.github.com/users/kathrynchapman/repos",
"events_url": "https://api.github.com/users/kathrynchapman/events{/privacy}",
"received_events_url": "https://api.github.com/users/kathrynchapman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?",
"Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\nI checked the dataset_infos.json and pubmed_neg.py script, I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do?",
"It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t}\r\n```\r\nThey should be\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n \"feature\": {\"dtype\": \"string\", \"id\": null, \"_type\": \"Value\"}\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\",\r\n \"feature\": {\"num_classes\": 5, \"names\": [\"-\", \"S\", \"H\", \"N\", \"C\"], \"names_file\": null, \"id\": null, \"_type\": \"ClassLabel\"}\r\n\t\t\t}\r\n```\r\n\r\nNote that you can generate the dataset_infos.json automatically to avoid mistakes:\r\n```bash\r\ndatasets-cli test ./path/to/dataset --save_infos\r\n```"
] | 2021-12-17T12:52:41 | 2022-02-24T09:10:26 | 2022-02-24T09:10:26 | NONE | null | null | null | ## Dataset viewer issue for 'pubmed_neg'
**Link:** https://huggingface.co/datasets/IGESML/pubmed_neg
I am getting the error:
Status code: 400
Exception: JSONDecodeError
Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202)
I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue.
Am I the one who added this dataset? Yes
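As a generic local check, loading the file with the standard library reproduces the same line/column diagnostics before uploading:
```python
import json

with open("dataset_infos.json") as f:
    json.load(f)  # a malformed file raises json.JSONDecodeError with the offending line/column
```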
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3448/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3447/comments | https://api.github.com/repos/huggingface/datasets/issues/3447/events | https://github.com/huggingface/datasets/issues/3447 | 1,082,539,790 | I_kwDODunzps5Ahj8O | 3,447 | HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading | {
"login": "dunalduck0",
"id": 51274745,
"node_id": "MDQ6VXNlcjUxMjc0NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/51274745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dunalduck0",
"html_url": "https://github.com/dunalduck0",
"followers_url": "https://api.github.com/users/dunalduck0/followers",
"following_url": "https://api.github.com/users/dunalduck0/following{/other_user}",
"gists_url": "https://api.github.com/users/dunalduck0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dunalduck0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dunalduck0/subscriptions",
"organizations_url": "https://api.github.com/users/dunalduck0/orgs",
"repos_url": "https://api.github.com/users/dunalduck0/repos",
"events_url": "https://api.github.com/users/dunalduck0/events{/privacy}",
"received_events_url": "https://api.github.com/users/dunalduck0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case",
"@lhoestq Thank you for explaining. I am sorry but I was not clear about my intention. I didn't want to kill internet traffic; I wanted to kill all write activity. In other words, you can imagine that my storage has only read access but crashes on write.\r\n\r\nWhen run_clm.py is invoked with the same parameters, the hash in the cache directory \"datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/...\" doesn't change, and my job can load cached data properly. This is great.\r\n\r\nUnfortunately, when params change (which happens sometimes), the hash changes and the old cache is invalid. datasets builder would create a new cache directory with the new hash and create JSON builder there, even though every JSON builder is the same. I didn't find a way to avoid such behavior.\r\n\r\nThis problem can be resolved when using datasets.map() for tokenizing and grouping text. This function allows me to specify output filenames with --cache_file_names, so that the cached files are always valid.\r\n\r\nThis is the code that I used to freeze cache filenames for tokenization. I wish I could do the same to datasets.load_dataset()\r\n```\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n cache_file_names={k: os.path.join(model_args.cache_dir, f'{k}-tokenized') for k in raw_datasets},\r\n )\r\n```",
"Hi ! `load_dataset` may re-generate your dataset if some parameters changed indeed. If you want to freeze a dataset loaded with `load_dataset`, I think the best solution is just to save it somewhere on your disk with `.save_to_disk(my_dataset_dir)` and reload it with `load_from_disk(my_dataset_dir)`. This way you will be able to reload the dataset without having to run `load_dataset`"
] | 2021-12-16T18:51:13 | 2022-02-17T14:16:27 | 2022-02-17T14:16:27 | NONE | null | null | null | ## Describe the bug
According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download a "custom data configuration" for JSON, even though I had already run the program once and cached all data into the same --cache_dir.
"Downloading" is not an issue when running on local disk, but it often crashes with cloud storage because (1) multiple GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes, due to storage throttling. 99% of the time, when the main process releases FileLocker, the file is not actually ready for access in cloud storage and thus triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to investigate super-reliable cloud storage, but that's out of scope here.
## Steps to reproduce the bug
```
export HF_DATASETS_OFFLINE=1
python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2
```
## Expected results
datasets should stop all "downloading" behavior and reuse the cached JSON configuration. I think the problem here is that part of the cache directory path, "default-471372bed4b51b53", is randomly generated and could change if some parameters change. And I didn't find a way to use a fixed path to ensure datasets reuses the cached data every time.
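A sketch of the freeze-and-reload approach suggested in the comments, which side-steps the hashed cache path entirely:
```python
from datasets import load_dataset, load_from_disk

raw_datasets = load_dataset(
    "json",
    data_files={"train": "trainpy.v2.train.json", "validation": "trainpy.v2.eval.json"},
)
raw_datasets.save_to_disk("datacache/trainpy.v2.frozen")  # fixed, parameter-independent path

# Later runs (and other processes) reload without re-running the builder:
raw_datasets = load_from_disk("datacache/trainpy.v2.frozen")
```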
## Actual results
The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426".
```
12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53
12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426)
Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|██████████| 2/2 [00:00<00:00, 17623.13it/s]
12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
100%|██████████| 2/2 [00:00<00:00, 1206.99it/s]
12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums.
12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train
12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation
12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes.
Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data.
100%|██████████| 2/2 [00:00<00:00, 53.54it/s]
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux
- Python version: 3.8.10
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3447/timeline | null | completed | false |