Column schema (type and observed value range per column; ⌀ marks columns that contain null values):

| Column | Type | Values / lengths |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.79B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-6.01k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1,587B-1,689B |
| updated_at | int64 | 1,588B-1,689B |
| closed_at | int64 | 1,587B-1,689B ⌀ |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| body | string | lengths 0-228k ⌀ |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| state_reason | string | 3 classes |
| draft | float64 | 0-1 ⌀ |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4668/comments | https://api.github.com/repos/huggingface/datasets/issues/4668/events | https://github.com/huggingface/datasets/issues/4668 | 1,299,735,893 | I_kwDODunzps5NeGVV | 4,668 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | {
"login": "hungnmai",
"id": 21364546,
"node_id": "MDQ6VXNlcjIxMzY0NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21364546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hungnmai",
"html_url": "https://github.com/hungnmai",
"followers_url": "https://api.github.com/users/hungnmai/followers",
"following_url": "https://api.github.com/users/hungnmai/following{/other_user}",
"gists_url": "https://api.github.com/users/hungnmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hungnmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hungnmai/subscriptions",
"organizations_url": "https://api.github.com/users/hungnmai/orgs",
"repos_url": "https://api.github.com/users/hungnmai/repos",
"events_url": "https://api.github.com/users/hungnmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hungnmai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | [
"It seems like a private dataset. The viewer is currently not supported on the private datasets."
] | 1,657,389,853,000 | 1,657,525,667,000 | 1,657,525,667,000 | NONE | null | ### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4668/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4667/comments | https://api.github.com/repos/huggingface/datasets/issues/4667/events | https://github.com/huggingface/datasets/issues/4667 | 1,299,735,703 | I_kwDODunzps5NeGSX | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | {
"login": "hungnmai",
"id": 21364546,
"node_id": "MDQ6VXNlcjIxMzY0NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/21364546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hungnmai",
"html_url": "https://github.com/hungnmai",
"followers_url": "https://api.github.com/users/hungnmai/followers",
"following_url": "https://api.github.com/users/hungnmai/following{/other_user}",
"gists_url": "https://api.github.com/users/hungnmai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hungnmai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hungnmai/subscriptions",
"organizations_url": "https://api.github.com/users/hungnmai/orgs",
"repos_url": "https://api.github.com/users/hungnmai/repos",
"events_url": "https://api.github.com/users/hungnmai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hungnmai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,657,389,795,000 | 1,657,525,635,000 | 1,657,525,635,000 | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4667/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4666/comments | https://api.github.com/repos/huggingface/datasets/issues/4666/events | https://github.com/huggingface/datasets/issues/4666 | 1,299,732,238 | I_kwDODunzps5NeFcO | 4,666 | Issues with concatenating datasets | {
"login": "ChenghaoMou",
"id": 32014649,
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenghaoMou",
"html_url": "https://github.com/ChenghaoMou",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"Hi! I agree we should improve the features equality checks to account for this particular case. However, your code fails due to `answer_start` having the dtype `int64` instead of `int32` after loading from JSON (it's not possible to embed type precision info into a JSON file; `save_to_disk` does that for arrow files), which would lead to the concatenation error as PyArrow does not support this sort of type promotion. This can be fixed as follows:\r\n```python\r\ntemp = load_dataset(\"json\", data_files={\"train\": \"output.jsonl\"}, features=squad[\"train\"].features)\r\n``` ",
"That makes sense. I totally missed the `int64` and `int32` part. Thanks for pointing it out! Will close this issue for now."
] | 1,657,388,714,000 | 1,657,646,175,000 | 1,657,646,174,000 | NONE | null | ## Describe the bug
It is impossible to concatenate datasets if a feature is a sequence of dicts in one dataset and a dict of sequences in another, even though, according to the documentation, it should be automatically converted:
> A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with an internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatibility layer with the TensorFlow Datasets library but may be unwanted in some cases. If you don’t want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence).
## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_dataset
squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)
temp = load_dataset("json", data_files={"train": "output.jsonl"})
concatenate_datasets([temp["train"], squad["train"]])
```
## Expected results
No error executing that code
## Actual results
```
ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null").
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.8.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
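For reference, below is a minimal sketch of the workaround suggested in the comments above: reloading the JSON file with the original `features` so that `answer_start` keeps its `int32` dtype instead of being inferred as `int64`.
```python
from datasets import concatenate_datasets, load_dataset

squad = load_dataset("squad_v2")
squad["train"].to_json("output.jsonl", lines=True)

# Pass the original features explicitly: JSON cannot store integer precision,
# so without this the loader infers int64 and concatenation fails.
temp = load_dataset(
    "json",
    data_files={"train": "output.jsonl"},
    features=squad["train"].features,
)

combined = concatenate_datasets([temp["train"], squad["train"]])
```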
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4666/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4666/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4665/comments | https://api.github.com/repos/huggingface/datasets/issues/4665/events | https://github.com/huggingface/datasets/issues/4665 | 1,299,652,638 | I_kwDODunzps5NdyAe | 4,665 | Unable to create dataset having Python dataset script only | {
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @aleSuglia, thanks for reporting.\r\n\r\nWe are having a look at it. \r\n\r\nWe transfer this issue to the Community tab of the corresponding Hub dataset: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/discussions"
] | 1,657,367,146,000 | 1,657,523,409,000 | 1,657,523,401,000 | CONTRIBUTOR | null | ## Describe the bug
Hi there,
I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/
I'm trying to do so using the CLI commands, but it seems that this command generates the wrong `dataset_info.json` file (you can already find it in the repo):
```
datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs
```
while it errors when I remove the python script:
```
datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs
```
The error message is the following:
```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4665/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4664/comments | https://api.github.com/repos/huggingface/datasets/issues/4664/events | https://github.com/huggingface/datasets/pull/4664 | 1,299,571,212 | PR_kwDODunzps47IvfG | 4,664 | Add stanford dog dataset | {
"login": "khushmeeet",
"id": 8711912,
"node_id": "MDQ6VXNlcjg3MTE5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khushmeeet",
"html_url": "https://github.com/khushmeeet",
"followers_url": "https://api.github.com/users/khushmeeet/followers",
"following_url": "https://api.github.com/users/khushmeeet/following{/other_user}",
"gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions",
"organizations_url": "https://api.github.com/users/khushmeeet/orgs",
"repos_url": "https://api.github.com/users/khushmeeet/repos",
"events_url": "https://api.github.com/users/khushmeeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/khushmeeet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @khushmeeet, thanks for your contribution.\r\n\r\nBut wouldn't it be better to add this dataset to the Hub? \r\n- https://huggingface.co/docs/datasets/share\r\n- https://huggingface.co/docs/datasets/dataset_script",
"Hi @albertvillanova \r\n\r\nDataset is added to Hub - https://huggingface.co/datasets/dgrnd4/stanford_dog_dataset",
"Great, so I guess we can close this issue, as the dataset is already available on the Hub.",
"OK I read the discussion on:\r\n- #4504\r\n\r\nCurrently, priority is adding datasets to the Hub, not here on GitHub.\r\n\r\nIf you would like to contribute the loading script and all the metadata you generated (README + JSON files), you could:\r\n- Either make a PR to the existing dataset on the Hub\r\n- Create a new dataset on the Hub:\r\n - Either under your personal namespace\r\n - or even more professionally, under the namespace `stanfordSVL` (Stanford Vision and Learning Lab: https://svl.stanford.edu/)\r\n\r\nYou can use the Community tab to ping us if you need help or have any questions."
] | 1,657,341,967,000 | 1,657,891,832,000 | 1,657,890,942,000 | CONTRIBUTOR | null | This PR adds a dataset, related to issue #4504.
We are adding the Stanford dog breed dataset, a multi-class image classification dataset.
Details can be found here - http://vision.stanford.edu/aditya86/ImageNetDogs/
Tests on dummy data are currently failing, which I am looking into. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4664/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4664",
"html_url": "https://github.com/huggingface/datasets/pull/4664",
"diff_url": "https://github.com/huggingface/datasets/pull/4664.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4664.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4663/comments | https://api.github.com/repos/huggingface/datasets/issues/4663/events | https://github.com/huggingface/datasets/pull/4663 | 1,299,298,693 | PR_kwDODunzps47H19n | 4,663 | Add text decorators | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657,302,708,000 | 1,658,169,194,000 | 1,658,168,449,000 | MEMBER | null | This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides!
![underline](https://user-images.githubusercontent.com/59462357/178044392-9596693e-9a4a-479a-a282-f1edbd90be1a.png)
TODO:
- [x] Open PR to support new Tailwind classes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4663/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4663",
"html_url": "https://github.com/huggingface/datasets/pull/4663",
"diff_url": "https://github.com/huggingface/datasets/pull/4663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4663.patch",
"merged_at": "2022-07-18T18:20:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4662/comments | https://api.github.com/repos/huggingface/datasets/issues/4662/events | https://github.com/huggingface/datasets/pull/4662 | 1,298,845,369 | PR_kwDODunzps47GTEc | 4,662 | Fix: conll2003 - fix empty example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657,277,353,000 | 1,657,289,693,000 | 1,657,288,962,000 | MEMBER | null | As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4662/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4662",
"html_url": "https://github.com/huggingface/datasets/pull/4662",
"diff_url": "https://github.com/huggingface/datasets/pull/4662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4662.patch",
"merged_at": "2022-07-08T14:02:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4661/comments | https://api.github.com/repos/huggingface/datasets/issues/4661/events | https://github.com/huggingface/datasets/issues/4661 | 1,298,374,944 | I_kwDODunzps5NY6Eg | 4,661 | Concurrency bug when using same cache among several jobs | {
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | [
"I can confirm that if I run one job first that processes the dataset, then I can run any jobs in parallel with no problem (no write-concurrency anymore...). ",
"Hi! That's weird. It seems like the error points to the `mkstemp` function, but the official docs state the following:\r\n```\r\nThere are no race conditions in the file’s creation, assuming that the platform properly implements the [os.O_EXCL](https://docs.python.org/3/library/os.html#os.O_EXCL) flag for [os.open()](https://docs.python.org/3/library/os.html#os.open)\r\n```\r\nSo this could mean your platform doesn't support that flag.\r\n\r\n~~Can you please check if wrapping the temp file creation (the line `tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)` in `_map_single`) with the `multiprocess.Lock` fixes the issue?~~\r\nPerhaps wrapping the temp file creation in `_map_single` with `filelock` could work:\r\n```python\r\nwith FileLock(lock_path):\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n```\r\nCan you please check if that helps?"
] | 1,657,245,491,000 | 1,657,905,083,000 | null | NONE | null | ## Describe the bug
I used to see this bug with an older version of `datasets`, and it seems to persist.
This is my concrete scenario: I launch several evaluation jobs on a cluster where the file system and the cache directory used by the Hugging Face libraries are shared. The evaluation jobs read the same *.csv files. If my jobs all get scheduled at pretty much the same time, there are all kinds of weird concurrency errors; sometimes it crashes silently. This time I got lucky: it crashed with a stack trace that I can share, and maybe you can get to the bottom of this. If you don't have a similar setup available, it may be hard to reproduce, as you really need two jobs accessing the same file at the same time to see this type of bug.
## Steps to reproduce the bug
I'm running a modified version of the `run_glue.py` script adapted to my use case. I've seen the same problem when running some GLUE datasets as well (so it's not specific to loading datasets from CSV files).
## Expected results
No crash; concurrent access to the (intermediate) files works just fine.
## Actual results
Crashes due to races/concurrency bugs.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 8.0.0
- Pandas version: 1.1.0
Stack trace that I just got with the crash (I've obfuscated some names; it should still be quite informative):
```
Running tokenizer on dataset: 0%| | 0/3 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "../../src/models//run_*******.py", line 600, in <module>
main()
File "../../src/models//run_*******.py", line 444, in main
raw_datasets = raw_datasets.map(
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2376, in map
return self._map_single(
File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 551, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2776, in _map_single
buf_writer, writer, tmp_file = init_buffer_and_writer()
File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2696, in init_buffer_and_writer
tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(cache_file_name), delete=False)
File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 541, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/*******/cache-transformers//transformers/csv/default-ef9cd184210742a7/0.0.0/51cce309a08df9c4d82ffd9363bbe090bf173197fc01a71b034e8594995a1a58/tmps8l6j5yc'
```
While running hundreds of experiments for an empirical paper last year, I ran into this type of bug several times. I found several band-aid workarounds that all work fine, e.g., run one job first to cache the dataset (eliminating concurrency), or use unique caches per job (eliminating concurrency at the cost of extra storage); the latter is sketched below.
I'd like to help you fix this bug, as it's really annoying to always have to apply the workarounds. Let me know what other info from my side could help you figure out the issue.
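For illustration, a minimal sketch of the unique-caches workaround described above. The environment variable used to derive a per-job cache directory is a hypothetical example; any value unique to the job works.
```python
import os

from datasets import load_dataset

# Hypothetical per-job identifier, e.g. provided by the cluster scheduler.
job_id = os.environ.get("JOB_ID", "local")

# Each job writes its intermediate Arrow files to its own cache directory,
# which eliminates write concurrency at the cost of duplicated cache files.
raw_datasets = load_dataset(
    "csv",
    data_files={"train": "train.csv"},  # hypothetical input file
    cache_dir=f"/shared/hf-cache/{job_id}",  # hypothetical shared path
)
```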
Thanks for your help!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4661/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/4661/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4660/comments | https://api.github.com/repos/huggingface/datasets/issues/4660/events | https://github.com/huggingface/datasets/pull/4660 | 1,297,128,387 | PR_kwDODunzps47AYDq | 4,660 | Fix _resolve_single_pattern_locally on Windows with multiple drives | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch ! Sorry I forgot (again) about windows paths when writing this x)"
] | 1,657,187,850,000 | 1,657,213,416,000 | 1,657,212,727,000 | MEMBER | null | Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception:
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__
**kwargs,
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally
for filepath in glob_iter
C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp>
os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet'
start = '/'
...
E ValueError: path is on mount 'C:', start on mount 'D:'
```
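To illustrate the underlying failure (this is not the library code, just a minimal reproduction of `os.path.relpath` behavior on Windows-style paths):
```python
import ntpath  # the Windows flavor of os.path, importable on any platform

# relpath cannot express a path on one drive relative to a start on another:
ntpath.relpath("C:\\Users\\runneradmin\\dataset.parquet", "D:\\a\\datasets")
# raises ValueError: path is on mount 'C:', start on mount 'D:'
```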
This PR makes sure that `base_path` is on the same drive as `pattern`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4660/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4660",
"html_url": "https://github.com/huggingface/datasets/pull/4660",
"diff_url": "https://github.com/huggingface/datasets/pull/4660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4660.patch",
"merged_at": "2022-07-07T16:52:07"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4659/comments | https://api.github.com/repos/huggingface/datasets/issues/4659/events | https://github.com/huggingface/datasets/pull/4659 | 1,297,094,140 | PR_kwDODunzps47AQo9 | 4,659 | Transfer CI to GitHub Actions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot @albertvillanova ! I hope we're finally done with flakiness on windows ^^\r\n\r\nAlso thanks for paying extra attention to billing and avoiding running unnecessary jobs. Though for certain aspects (see my comments), I think it's worth having the extra jobs to make our life easier",
"~@lhoestq I think you forgot to add your comments?~\r\n\r\nI had missed it among all the other comments...",
"@lhoestq, I'm specially enthusiastic with the fail-fast policy: it was in my TODO list for a long time. I really think it will have a positive impact (I would love to know the spent time saving it will enable, besides the carbon footprint reduction). :wink: \r\n\r\nSo yes, as you said above, let's give it a try at least. If we encounter any inconvenience, we can easily disable it.\r\n\r\nQuestion: I guess I have to disable CircleCI CI before merging this PR?\r\n\r\n"
] | 1,657,186,187,000 | 1,657,625,420,000 | 1,657,624,705,000 | MEMBER | null | This PR transfers CI from CircleCI to GitHub Actions. The implementation in GitHub Actions tries to be as faithful as possible to the implementation in CircleCI and get the same output results (exceptions below).
**IMPORTANT NOTE**: The fail-fast policy (described below) was ultimately not implemented, so that:
- we can continue merging PRs with CI in red because of some random error returned by the Hub
- it is not annoying for maintainers to have to relaunch failed CI jobs
See comments here: https://github.com/huggingface/datasets/pull/4659#discussion_r918802348
Differences in the implementation in GitHub Actions compared to the CircleCI one:
- This PR introduces some *fail-fast* mechanisms to significantly reduce the total CI running time, both to lower the environmental impact and because GitHub Actions billing depends on the number of running minutes per month (see [About billing for GitHub Actions](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions)):
- All tests *depend* on the `check_code_quality` job: the other test jobs are launched only if `check_code_quality` passes
- The tests are implemented with a matrix strategy (cross-product: OS and PyArrow versions) and fail-fast: if any of the 4 processes fails, the others are cancelled
- OS dependencies for Linux (see table below)
| OS dependencies | Passed tests | Skipped tests |
| --- | ---: | ---: |
| libsndfile1-dev | 4786 | 3119 |
| libsndfile1 | 4786 | 3119 |
| libsndfile1, sox | 4788 | 3117 |
- This PR replaces `libsndfile1-dev` with `libsndfile1`: the same number of passing tests but fewer packages installed
- This PR adds `sox`: required by the MP3 tests (2 more tests pass: 4788 instead of 4786)
- For tests using PyArrow 6, this PR uses 6.0.1 instead of 6.0.0
TO DO:
- [ ] Remove old CircleCI CI: kept for the moment to compare stability and performance
Close #4658.
## Comparison between CircleCI and GitHub Actions
| | | CircleCI | GitHub Actions |
| --- | --- | ---: | ---: |
| Ubuntu, pyarrow-latest ||||
|| Passed tests | 4786 | 4788 |
|| Duration | 11m 0s | 10m 10s |
| Windows, pyarrow-latest ||||
|| Passed tests | 4783 | 4783 |
|| Duration | 29m 59s | 22m 56s | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4659/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4659",
"html_url": "https://github.com/huggingface/datasets/pull/4659",
"diff_url": "https://github.com/huggingface/datasets/pull/4659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4659.patch",
"merged_at": "2022-07-12T11:18:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4658/comments | https://api.github.com/repos/huggingface/datasets/issues/4658/events | https://github.com/huggingface/datasets/issues/4658 | 1,297,001,390 | I_kwDODunzps5NTquu | 4,658 | Transfer CI tests to GitHub Actions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,657,181,450,000 | 1,657,624,705,000 | 1,657,624,705,000 | MEMBER | null | Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4658/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4657/comments | https://api.github.com/repos/huggingface/datasets/issues/4657/events | https://github.com/huggingface/datasets/issues/4657 | 1,296,743,133 | I_kwDODunzps5NSrrd | 4,657 | Add SQuAD2.0 Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"Hey, It's already present [here](https://huggingface.co/datasets/squad_v2) ",
"Hi! This dataset is indeed already available on the Hub. Closing."
] | 1,657,163,976,000 | 1,657,642,492,000 | 1,657,642,492,000 | NONE | null | ## Adding a Dataset
- **Name:** *SQuAD2.0*
- **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.*
- **Paper:** *https://aclanthology.org/P18-2124.pdf*
- **Data:** *https://rajpurkar.github.io/SQuAD-explorer/*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
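Since the comments note that this dataset is already on the Hub under the `squad_v2` identifier, a minimal loading sketch:
```python
from datasets import load_dataset

# Loads the existing Hub dataset referenced in the comments above
squad_v2 = load_dataset("squad_v2")
print(squad_v2["train"][0])
```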
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4657/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4656/comments | https://api.github.com/repos/huggingface/datasets/issues/4656/events | https://github.com/huggingface/datasets/issues/4656 | 1,296,740,266 | I_kwDODunzps5NSq-q | 4,656 | Add Amazon-QA Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)."
] | 1,657,163,711,000 | 1,657,765,212,000 | 1,657,765,212,000 | NONE | null | ## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is in .jsonl format, where each line in the file is a JSON string that corresponds to a question, existing answers to the question, and the extracted review snippets (relevant to the question).*
- **Paper:** *https://github.com/amazonqa/amazonqa/tree/master/paper*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4656/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4655/comments | https://api.github.com/repos/huggingface/datasets/issues/4655/events | https://github.com/huggingface/datasets/issues/4655 | 1,296,720,896 | I_kwDODunzps5NSmQA | 4,655 | Simple Wikipedia | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/simple-wiki)."
] | 1,657,162,286,000 | 1,657,764,993,000 | 1,657,764,993,000 | NONE | null | ## Adding a Dataset
- **Name:** *Simple Wikipedia*
- **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task", William Coster and David Kauchak (2011).*
- **Paper:** *https://aclanthology.org/P11-2117/*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4655/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4654/comments | https://api.github.com/repos/huggingface/datasets/issues/4654/events | https://github.com/huggingface/datasets/issues/4654 | 1,296,716,119 | I_kwDODunzps5NSlFX | 4,654 | Add Quora Question Triplets Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/QQP_triplets)."
] | 1,657,161,822,000 | 1,657,764,830,000 | 1,657,764,830,000 | NONE | null | ## Adding a Dataset
- **Name:** *Quora Question Triplets*
- **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.*
- **Paper:**
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4654/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4653/comments | https://api.github.com/repos/huggingface/datasets/issues/4653/events | https://github.com/huggingface/datasets/issues/4653 | 1,296,702,834 | I_kwDODunzps5NSh1y | 4,653 | Add Altlex dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)."
] | 1,657,160,582,000 | 1,657,764,759,000 | 1,657,764,759,000 | NONE | null | ## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4653/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4652/comments | https://api.github.com/repos/huggingface/datasets/issues/4652/events | https://github.com/huggingface/datasets/issues/4652 | 1,296,697,498 | I_kwDODunzps5NSgia | 4,652 | Add Sentence Compression Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/sentence-compression)."
] | 1,657,160,026,000 | 1,657,764,708,000 | 1,657,764,708,000 | NONE | null | ## Adding a Dataset
- **Name:** *Sentence Compression*
- **Description:** *Large corpus of uncompressed and compressed sentences from news articles.*
- **Paper:** *https://www.aclweb.org/anthology/D13-1155/*
- **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4652/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4651/comments | https://api.github.com/repos/huggingface/datasets/issues/4651/events | https://github.com/huggingface/datasets/issues/4651 | 1,296,689,414 | I_kwDODunzps5NSekG | 4,651 | Add Flickr 30k Dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)."
] | 1,657,159,148,000 | 1,657,764,585,000 | 1,657,764,585,000 | NONE | null | ## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.*
- **Paper:** *https://transacl.org/ojs/index.php/tacl/article/view/229/33*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4651/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4650/comments | https://api.github.com/repos/huggingface/datasets/issues/4650/events | https://github.com/huggingface/datasets/issues/4650 | 1,296,680,037 | I_kwDODunzps5NScRl | 4,650 | Add SPECTER dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] | 1,657,158,092,000 | 1,657,764,469,000 | null | NONE | null | ## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4650/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4649/comments | https://api.github.com/repos/huggingface/datasets/issues/4649/events | https://github.com/huggingface/datasets/issues/4649 | 1,296,673,712 | I_kwDODunzps5NSauw | 4,649 | Add PAQ dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/PAQ_pairs)"
] | 1,657,157,382,000 | 1,657,764,387,000 | 1,657,764,387,000 | NONE | null | ## Adding a Dataset
- **Name:** *PAQ*
- **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them*
- **Paper:** *https://arxiv.org/abs/2102.07033*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4649/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4648/comments | https://api.github.com/repos/huggingface/datasets/issues/4648/events | https://github.com/huggingface/datasets/issues/4648 | 1,296,659,335 | I_kwDODunzps5NSXOH | 4,648 | Add WikiAnswers dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/WikiAnswers)"
] | 1,657,155,997,000 | 1,657,764,220,000 | 1,657,764,220,000 | NONE | null | ## Adding a Dataset
- **Name:** *WikiAnswers*
- **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.*
- **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677*
- **Data:** *https://github.com/afader/oqa#wikianswers-corpus*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4648/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4647/comments | https://api.github.com/repos/huggingface/datasets/issues/4647/events | https://github.com/huggingface/datasets/issues/4647 | 1,296,311,270 | I_kwDODunzps5NRCPm | 4,647 | Add Reddit dataset | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | [] | 1,657,136,958,000 | 1,657,136,958,000 | null | NONE | null | ## Adding a Dataset
- **Name:** *Reddit comments (2015-2018)*
- **Description:** *Reddit is an American social news aggregation website where users can post links and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.*
- **Paper:** *https://arxiv.org/abs/1904.06472*
- **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4647/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4645/comments | https://api.github.com/repos/huggingface/datasets/issues/4645/events | https://github.com/huggingface/datasets/pull/4645 | 1,296,027,785 | PR_kwDODunzps468oZ6 | 4,645 | Set HF_SCRIPTS_VERSION to main | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657,122,201,000 | 1,657,122,981,000 | 1,657,122,305,000 | MEMBER | null | After renaming "master" to "main", the CI fails with
```
AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/main/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at /home/circleci/datasets/_dummy/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py"
```
This is because in the CI we were still using `HF_SCRIPTS_VERSION=master`. I changed it to "main". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4645/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4645",
"html_url": "https://github.com/huggingface/datasets/pull/4645",
"diff_url": "https://github.com/huggingface/datasets/pull/4645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4645.patch",
"merged_at": "2022-07-06T15:45:05"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4644/comments | https://api.github.com/repos/huggingface/datasets/issues/4644/events | https://github.com/huggingface/datasets/pull/4644 | 1,296,018,052 | PR_kwDODunzps468mQb | 4,644 | [Minor fix] Typo correction | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,657,121,822,000 | 1,657,122,992,000 | 1,657,122,316,000 | CONTRIBUTOR | null | recieve -> receive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4644/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4644",
"html_url": "https://github.com/huggingface/datasets/pull/4644",
"diff_url": "https://github.com/huggingface/datasets/pull/4644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4644.patch",
"merged_at": "2022-07-06T15:45:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4643/comments | https://api.github.com/repos/huggingface/datasets/issues/4643/events | https://github.com/huggingface/datasets/pull/4643 | 1,295,852,650 | PR_kwDODunzps468Cqk | 4,643 | Rename master to main | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"All the mentions I found on google were simple URLs that will be redirected, so it's fine. I also checked the spaces and we should be good:\r\n- dalle-mini used to install the master branch but [it's no longer the case](https://huggingface.co/spaces/flax-community/dalle-mini/commit/b78c972afd5c2d2bed087be6479fe5c9c6cfa741)\r\n- same for [logo generator](https://huggingface.co/spaces/tom-doerr/logo_generator/commit/a9ea330e518870d0ca8f65abb56f71d86750d8e4)\r\n- I opened a PR to fix [vision-datasets-viewer](https://huggingface.co/spaces/nateraw/vision-datasets-viewer/discussions/1)\r\n",
"Ok let's rename the branch, and then we can merge this PR"
] | 1,657,114,470,000 | 1,657,121,806,000 | 1,657,121,108,000 | MEMBER | null | This PR renames mentions of "master" by "main" in the code base for several cases:
- set the default dataset script version to "main" if the local installation of `datasets` is a dev installation
- update URLs to this github repository to use "main"
- update the DVC benchmark
- update the github workflows
- update docstrings
- update tests to compare the changes in dataset cards against "main"
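As an illustration of the first bullet, a hedged sketch of pinning the script revision for dev installs; the helper name and URL pattern are hypothetical, not the actual `datasets` internals:

```python
import datasets

def resolve_scripts_revision() -> str:
    # Hypothetical helper: dev installs (e.g. "2.3.3.dev0") track the moving
    # "main" branch, while released versions resolve against their own tag.
    version = datasets.__version__
    return "main" if "dev" in version else version

# Hypothetical script URL built from the resolved revision
url = (
    "https://raw.githubusercontent.com/huggingface/datasets/"
    f"{resolve_scripts_revision()}/datasets/squad/squad.py"
)
```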
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4643/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4643",
"html_url": "https://github.com/huggingface/datasets/pull/4643",
"diff_url": "https://github.com/huggingface/datasets/pull/4643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4643.patch",
"merged_at": "2022-07-06T15:25:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4642/comments | https://api.github.com/repos/huggingface/datasets/issues/4642/events | https://github.com/huggingface/datasets/issues/4642 | 1,295,748,083 | I_kwDODunzps5NO4vz | 4,642 | Streaming issue for ccdv/pubmed-summarization | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ",
"Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.",
"I've opened a PR on their Hub dataset to support streaming: https://huggingface.co/datasets/ccdv/pubmed-summarization/discussions/2"
] | 1,657,109,587,000 | 1,657,117,054,000 | 1,657,117,054,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/ccdv/pubmed-summarization
### Description
This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?
```
Status code: 400
Exception: FileNotFoundError
Message: https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip/train.txt
```
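Given the `train.zip/train.txt` path in the error, the script presumably joins file names onto the archive URL in a way streaming cannot resolve. For context, a minimal sketch of the usual streaming-friendly pattern, assuming a `train.zip` archive containing `train.txt`; the class name, features, and URL here are illustrative, not the script's actual code:

```python
import os
import datasets

_URL = "https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip"

class PubmedSummarization(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # In streaming mode this returns a chained "zip://::https://..." URL
        # rather than a local directory, so paths must be built generically.
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(data_dir, "train.txt")},
            )
        ]

    def _generate_examples(self, filepath):
        # `open` and `os.path.join` are patched by `datasets` when streaming,
        # so this works for a local file and for a file inside a remote zip.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line}
```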
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4642/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4641/comments | https://api.github.com/repos/huggingface/datasets/issues/4641/events | https://github.com/huggingface/datasets/issues/4641 | 1,295,633,250 | I_kwDODunzps5NOcti | 4,641 | Dataset Viewer issue for kmfoda/booksum | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @lewtun.\r\n\r\nIt works locally in streaming mode:\r\n```\r\n{'bid': 27681,\r\n 'is_aggregate': True,\r\n 'source': 'cliffnotes',\r\n 'chapter_path': 'all_chapterized_books/27681-chapters/chapters_1_to_2.txt',\r\n 'summary_path': 'finished_summaries/cliffnotes/The Last of the Mohicans/section_1_part_0.txt',\r\n 'book_id': 'The Last of the Mohicans.chapters 1-2',\r\n 'summary_id': 'chapters 1-2',\r\n 'content': None,\r\n 'summary': '{\"name\": \"Chapters 1-2\", \"url\": \"https://web.archive.org/web/20201101053205/https://www.cliffsnotes.com/literature/l/the-last-of-the-mohicans/summary-and-analysis/chapters-12\", \"summary\": \"Before any characters appear, the time and geography are made clear. Though it is the last war that England and France waged for a country that neither would retain, the wilderness between the forces still has to be...\r\n```\r\n\r\nI'm forcing the refresh of the preview. ",
"The preview appears as expected once the refresh forced.",
"Thank you @albertvillanova 🤗 !"
] | 1,657,103,896,000 | 1,657,113,928,000 | 1,657,108,686,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/kmfoda/booksum
### Description
A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to:
```
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/kmfoda/booksum/resolve/47953f583d6967f086cb16a2f4d2346e9834024d/test.csv')
```
I'm not sure why it says "Unauthorized" since it's just a bunch of CSV files in a repo.
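For what it's worth, a quick hedged check along the lines of the comment above (the split name is taken from the `test.csv` URL in the error):

```python
from datasets import load_dataset

# If this prints a record, the CSV files themselves are readable and the
# "Unauthorized" error is specific to the viewer backend, not the repo.
ds = load_dataset("kmfoda/booksum", split="test", streaming=True)
print(next(iter(ds)))
```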
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4641/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4640/comments | https://api.github.com/repos/huggingface/datasets/issues/4640/events | https://github.com/huggingface/datasets/pull/4640 | 1,295,495,699 | PR_kwDODunzps4660rI | 4,640 | Support all split in streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4640). All of your documentation changes will be reflected on that endpoint."
] | 1,657,097,798,000 | 1,657,120,795,000 | null | MEMBER | null | Fix #4637. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4640/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4640",
"html_url": "https://github.com/huggingface/datasets/pull/4640",
"diff_url": "https://github.com/huggingface/datasets/pull/4640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4640.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4639/comments | https://api.github.com/repos/huggingface/datasets/issues/4639/events | https://github.com/huggingface/datasets/issues/4639 | 1,295,367,322 | I_kwDODunzps5NNbya | 4,639 | Add HaGRID -- HAnd Gesture Recognition Image Dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | [] | 1,657,093,292,000 | 1,657,093,292,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset
- **Description:** We introduce a large image dataset, HaGRID (HAnd Gesture Recognition Image Dataset), for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. The proposed dataset allows building HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz, etc.), home automation systems, the automotive sector, etc.
- **Paper:** https://arxiv.org/abs/2206.08219
- **Data:** https://github.com/hukenovs/hagrid
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4639/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4638/comments | https://api.github.com/repos/huggingface/datasets/issues/4638/events | https://github.com/huggingface/datasets/pull/4638 | 1,295,233,315 | PR_kwDODunzps4656H9 | 4,638 | The speechocean762 dataset | {
"login": "jimbozhang",
"id": 1777456,
"node_id": "MDQ6VXNlcjE3Nzc0NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1777456?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimbozhang",
"html_url": "https://github.com/jimbozhang",
"followers_url": "https://api.github.com/users/jimbozhang/followers",
"following_url": "https://api.github.com/users/jimbozhang/following{/other_user}",
"gists_url": "https://api.github.com/users/jimbozhang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimbozhang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimbozhang/subscriptions",
"organizations_url": "https://api.github.com/users/jimbozhang/orgs",
"repos_url": "https://api.github.com/users/jimbozhang/repos",
"events_url": "https://api.github.com/users/jimbozhang/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimbozhang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | [
"CircleCL reported two errors, but I didn't find the reason. The error message:\r\n```\r\n_________________ ERROR collecting tests/test_dataset_cards.py _________________\r\ntests/test_dataset_cards.py:53: in <module>\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\ntests/test_dataset_cards.py:35: in get_changed_datasets\r\n diff_output = check_output([\"git\", \"diff\", \"--name-only\", \"origin/master...HEAD\"], cwd=repo_path)\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:356: in check_output\r\n **kwargs).stdout\r\n../.pyenv/versions/3.6.15/lib/python3.6/subprocess.py:438: in run\r\n output=stdout, stderr=stderr)\r\nE subprocess.CalledProcessError: Command '['git', 'diff', '--name-only', 'origin/master...HEAD']' returned non-zero exit status 128.\r\n\r\n=========================== short test summary info ============================\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\nERROR tests/test_dataset_cards.py - subprocess.CalledProcessError: Command '[...\r\n= 4011 passed, 2357 skipped, 2 xfailed, 1 xpassed, 116 warnings, 2 errors in 284.32s (0:04:44) =\r\n\r\nExited with code exit status 1\r\n```\r\nI'm not sure if it was caused by this PR ...\r\n\r\nI ran `tests/test_dataset_cards.py` in my local environment, and it passed:\r\n```\r\n(venv)$ pytest tests/test_dataset_cards.py\r\n============================== test session starts ==============================\r\nplatform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/zhangjunbo/src/datasets\r\nplugins: forked-1.4.0, datadir-1.3.1, xdist-2.5.0\r\ncollected 1531 items\r\n\r\ntests/test_dataset_cards.py ..... [100%]\r\n======================= 766 passed, 765 skipped in 2.55s ========================\r\n```\r\n",
"@sanchit-gandhi could you also maybe take a quick look? :-)",
"Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help.",
"> Thanks for your contribution, @jimbozhang. Are you still interested in adding this dataset?\r\n> \r\n> We are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n> \r\n> We would suggest you create this dataset there. Please, feel free to tell us if you need some help.\r\n\r\nYes, I just planned to finish this dataset these days, and this suggestion is just in time! Thanks a lot!\r\nI will create this dataset to Hugging Face Hub soon, maybe this week."
] | 1,657,088,250,000 | 1,664,789,676,000 | 1,664,789,676,000 | NONE | null | [speechocean762](https://www.openslr.org/101/) is a non-native English corpus for pronunciation scoring tasks. It is free for both commercial and non-commercial use.
I believe it would be easier to use if it were available on Hugging Face. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4638/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4638/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4638",
"html_url": "https://github.com/huggingface/datasets/pull/4638",
"diff_url": "https://github.com/huggingface/datasets/pull/4638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4638.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4637/comments | https://api.github.com/repos/huggingface/datasets/issues/4637/events | https://github.com/huggingface/datasets/issues/4637 | 1,294,818,236 | I_kwDODunzps5NLVu8 | 4,637 | The "all" split breaks streaming | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.",
"@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!",
"@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I just pushed the test (to see if it impacts other tests).",
"It impacted the test `test_generator_based_download_and_prepare` and I have fixed this.\r\n\r\nSo that you can copy the test I implemented in my PR and then implement a fix for this issue that passes the test `tests/test_builder.py::test_builder_as_streaming_dataset`.",
"Hi @cakiki are you still interested in working on this? Are you planning to open a PR?",
"Hi @albertvillanova ! Sorry it took so long; I wanted to spend this weekend working on it."
] | 1,657,058,209,000 | 1,657,893,570,000 | null | CONTRIBUTOR | null | ## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`.
## Steps to reproduce the bug
The following works:
```python
from datasets import load_dataset
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad split: all. Available splits: ['train', 'validation', 'test']`:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True)
```
## Expected results
An iterator over all splits.
## Actual results
I had to do the following to achieve the desired result:
```python
from itertools import chain
ds = load_dataset('super_glue', 'wsc.fixed', streaming=True)
it = chain.from_iterable(ds.values())
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4637/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4636/comments | https://api.github.com/repos/huggingface/datasets/issues/4636/events | https://github.com/huggingface/datasets/issues/4636 | 1,294,547,836 | I_kwDODunzps5NKTt8 | 4,636 | Add info in docs about behavior of download_config.num_proc | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,657,040,460,000 | 1,659,004,832,000 | 1,659,004,832,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
I went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it.
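For reference, overriding it looks roughly like this (a sketch; the dataset name is a placeholder, and the assumption that `num_proc` is read from the `DownloadConfig` comes from the code linked below):
```python
from datasets import DownloadConfig, load_dataset

# Assumed wiring: the download manager reads `num_proc` from the
# DownloadConfig; bump it above the default of 16 mentioned below.
dl_config = DownloadConfig(num_proc=32)
ds = load_dataset("some_dataset", download_config=dl_config)  # placeholder dataset name
```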
**Describe the solution you'd like**
- Add note about how the default number of workers is 16. Related code:
https://github.com/huggingface/datasets/blob/7bcac0a6a0fc367cc068f184fa132b8de8dfa11d/src/datasets/download/download_manager.py#L299-L302
- Add note that if the number of workers is higher than the number of files to download, it won't use multiprocessing.
**Describe alternatives you've considered**
Maybe it would also be nice to cap `num_proc` at `num_files` when `num_proc` > `num_files`.
**Additional context**
...
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4636/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4635/comments | https://api.github.com/repos/huggingface/datasets/issues/4635/events | https://github.com/huggingface/datasets/issues/4635 | 1,294,475,931 | I_kwDODunzps5NKCKb | 4,635 | Dataset Viewer issue for vadis/sv-ident | {
"login": "e-tornike",
"id": 20404466,
"node_id": "MDQ6VXNlcjIwNDA0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tornike",
"html_url": "https://github.com/e-tornike",
"followers_url": "https://api.github.com/users/e-tornike/followers",
"following_url": "https://api.github.com/users/e-tornike/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions",
"organizations_url": "https://api.github.com/users/e-tornike/orgs",
"repos_url": "https://api.github.com/users/e-tornike/repos",
"events_url": "https://api.github.com/users/e-tornike/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tornike/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[2]: \r\n{'sentence': 'Im Falle von Umweltbelastungen kann selten eindeutig entschieden werden, ob Unbedenklichkeitswerte bereits erreicht oder überschritten sind, die die menschliche Gesundheit oder andere Wohlfahrts»güter« beeinträchtigen.',\r\n 'is_variable': 0,\r\n 'variable': [],\r\n 'research_data': [],\r\n 'doc_id': '51971',\r\n 'uuid': 'ee3d7f88-1a3e-4a59-997f-e986b544a604',\r\n 'lang': 'de'}\r\n```",
"~~I have forced the refresh of the split in the preview without success.~~\r\n\r\nI have forced the refresh of the split in the preview, and now it works.",
"Preview seems to work now. \r\n\r\nhttps://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation",
"OK, thank you @e-tornike.\r\n\r\nApparently, after forcing the refresh, we just had to wait a little until it is effectively refreshed. ",
"I'm closing this issue as it was solved after forcing the refresh of the split in the preview.",
"Thanks a lot! :)"
] | 1,657,036,093,000 | 1,657,091,613,000 | 1,657,091,534,000 | NONE | null | ### Link
https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation
### Description
Error message when loading validation split in the viewer:
```
Status code: 400
Exception: Status400Error
Message: The split cache is empty.
```
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4635/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4635/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4634/comments | https://api.github.com/repos/huggingface/datasets/issues/4634/events | https://github.com/huggingface/datasets/issues/4634 | 1,294,405,251 | I_kwDODunzps5NJw6D | 4,634 | Can't load the Hausa audio dataset | {
"login": "moro23",
"id": 19976800,
"node_id": "MDQ6VXNlcjE5OTc2ODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/19976800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moro23",
"html_url": "https://github.com/moro23",
"followers_url": "https://api.github.com/users/moro23/followers",
"following_url": "https://api.github.com/users/moro23/following{/other_user}",
"gists_url": "https://api.github.com/users/moro23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moro23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moro23/subscriptions",
"organizations_url": "https://api.github.com/users/moro23/orgs",
"repos_url": "https://api.github.com/users/moro23/repos",
"events_url": "https://api.github.com/users/moro23/events{/privacy}",
"received_events_url": "https://api.github.com/users/moro23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid."
] | 1,657,032,456,000 | 1,663,078,052,000 | 1,663,078,052,000 | NONE | null | ```python
from datasets import load_dataset
common_voice_train = load_dataset("common_voice", "ha", split="train+validation")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4634/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4633/comments | https://api.github.com/repos/huggingface/datasets/issues/4633/events | https://github.com/huggingface/datasets/pull/4633 | 1,294,367,783 | PR_kwDODunzps462_qX | 4,633 | [data_files] Only match separated split names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of of them except one:\r\n- projecte-aina/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open a PR on their repository to fix it\r\nEDIT: pr [here](https://huggingface.co/datasets/projecte-aina/cat_manynames/discussions/2)",
"Feel free to merge @albertvillanova if it's all good to you :)",
"Thanks for the feedback @albertvillanova I took your comments into account :)\r\n- added numbers as supported delimiters\r\n- used list comprehension to create the patterns list\r\n- updated the docs and the tests according to your comments\r\n\r\nLet me know what you think !",
"I ended up removing the patching and the context manager :) merging"
] | 1,657,030,691,000 | 1,658,150,429,000 | 1,658,149,653,000 | MEMBER | null | As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching to infer which file goes into which split is too permissive. For example a file "contest.py" would be considered part of a test split (it contains "test") and "seqeval.py" as well (it contains "eval").
In this PR I made the pattern matching more robust by only matching split names **between separators**. The supported separators are dots, dashes, spaces and underscores.
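As a rough illustration (a simplified regex sketch, not the actual glob patterns the loader uses; the separator set is taken from the sentence above):
```python
import re

SEP = r"[.\-_ ]"  # dots, dashes, underscores, spaces

def mentions_split(filename: str, split: str) -> bool:
    # Match the split name only when delimited by separators or string edges.
    return re.search(rf"(?:^|{SEP}){split}(?:{SEP}|$)", filename) is not None

assert mentions_split("my_test_file.txt", "test")
assert not mentions_split("contest.py", "test")  # no longer treated as a test file
assert not mentions_split("seqeval.py", "eval")  # no longer treated as an eval file
```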
I updated the docs accordingly.
One detail about the tests: I had to update one test because it was using `PurePath.match` as a reference for globbing, but it doesn't support the `[..]` glob pattern. Therefore I added a `mock_fs` context manager that can be used to easily define a dummy filesystem with certain files in it and run pattern matching tests. Its code comes mostly from test_streaming_download_manager.py
Close https://github.com/huggingface/datasets/issues/4477 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4633/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4633",
"html_url": "https://github.com/huggingface/datasets/pull/4633",
"diff_url": "https://github.com/huggingface/datasets/pull/4633.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4633.patch",
"merged_at": "2022-07-18T13:07:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4632/comments | https://api.github.com/repos/huggingface/datasets/issues/4632/events | https://github.com/huggingface/datasets/issues/4632 | 1,294,166,880 | I_kwDODunzps5NI2tg | 4,632 | 'sort' method sorts one column only | {
"login": "shachardon",
"id": 42108562,
"node_id": "MDQ6VXNlcjQyMTA4NTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/42108562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shachardon",
"html_url": "https://github.com/shachardon",
"followers_url": "https://api.github.com/users/shachardon/followers",
"following_url": "https://api.github.com/users/shachardon/following{/other_user}",
"gists_url": "https://api.github.com/users/shachardon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shachardon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shachardon/subscriptions",
"organizations_url": "https://api.github.com/users/shachardon/orgs",
"repos_url": "https://api.github.com/users/shachardon/repos",
"events_url": "https://api.github.com/users/shachardon/events{/privacy}",
"received_events_url": "https://api.github.com/users/shachardon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?",
"Hi! thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (num of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted, the reset left in their original order. ",
"That's unexpected, can you share the code you used to get this ?"
] | 1,657,020,326,000 | 1,657,195,592,000 | null | NONE | null | The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4632/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4631/comments | https://api.github.com/repos/huggingface/datasets/issues/4631/events | https://github.com/huggingface/datasets/pull/4631 | 1,293,545,900 | PR_kwDODunzps460Vy0 | 4,631 | Update WinoBias README | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,966,280,000 | 1,657,200,212,000 | 1,657,199,507,000 | NONE | null | I'm adding some information about WinoBias that I got from the paper :smile:
I think this makes it a bit clearer! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4631/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4631/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4631",
"html_url": "https://github.com/huggingface/datasets/pull/4631",
"diff_url": "https://github.com/huggingface/datasets/pull/4631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4631.patch",
"merged_at": "2022-07-07T13:11:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4630/comments | https://api.github.com/repos/huggingface/datasets/issues/4630/events | https://github.com/huggingface/datasets/pull/4630 | 1,293,470,728 | PR_kwDODunzps460HFM | 4,630 | fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py. | {
"login": "gugarosa",
"id": 4120639,
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gugarosa",
"html_url": "https://github.com/gugarosa",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,959,215,000 | 1,657,034,392,000 | 1,657,033,701,000 | CONTRIBUTOR | null | Fix #4612.
Apparently, the newest `fsspec` versions do not allow attribute access to submodules that have not been imported, such as `fsspec.asyn`.
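A minimal sketch of the resulting change (only the explicit submodule import is taken from the PR; the reset logic is assumed from the purpose of `torch_iterable_dataset.py`):
```python
import fsspec.asyn  # importing the submodule makes `fsspec.asyn` accessible

def _set_fsspec_for_multiprocess() -> None:
    # Clear fsspec's cached event loop and IO thread so that each DataLoader
    # worker process creates its own (sketch; see the module for actual usage).
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```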
Thus, @mariosasko suggested adding the missing submodule import (as sketched above) to allow for this access. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4630/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4630/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4630",
"html_url": "https://github.com/huggingface/datasets/pull/4630",
"diff_url": "https://github.com/huggingface/datasets/pull/4630.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4630.patch",
"merged_at": "2022-07-05T15:08:21"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4629/comments | https://api.github.com/repos/huggingface/datasets/issues/4629/events | https://github.com/huggingface/datasets/issues/4629 | 1,293,418,800 | I_kwDODunzps5NGAEw | 4,629 | Rename repo default branch to main | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,656,954,970,000 | 1,657,122,597,000 | 1,657,122,597,000 | MEMBER | null | Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin:
Rename the fork's default branch as well at: https://github.com/USERNAME/datasets/settings/branches
Then:
```
git fetch origin main
git remote set-head origin -a
```
CC: @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4629/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4629/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4628/comments | https://api.github.com/repos/huggingface/datasets/issues/4628/events | https://github.com/huggingface/datasets/pull/4628 | 1,293,361,308 | PR_kwDODunzps46zvFJ | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,951,615,000 | 1,657,202,918,000 | 1,657,202,232,000 | CONTRIBUTOR | null | Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format.
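To illustrate the workaround (a sketch; the time value is arbitrary):
```python
from datetime import time

import pyarrow as pa

inferred = pa.array([time(1, 2, 3)]).type  # DataType(time64[us]), no `.unit`
fixed = pa.type_for_alias(str(inferred))   # Time64Type(time64[us]), has `.unit`
print(fixed.unit)  # "us"
```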
cc @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4628/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4628/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628",
"html_url": "https://github.com/huggingface/datasets/pull/4628",
"diff_url": "https://github.com/huggingface/datasets/pull/4628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4628.patch",
"merged_at": "2022-07-07T13:57:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4627/comments | https://api.github.com/repos/huggingface/datasets/issues/4627/events | https://github.com/huggingface/datasets/pull/4627 | 1,293,287,798 | PR_kwDODunzps46zfNa | 4,627 | fixed duplicate calculation of spearmanr function in metrics wrapper. | {
"login": "benlipkin",
"id": 38060297,
"node_id": "MDQ6VXNlcjM4MDYwMjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/38060297?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benlipkin",
"html_url": "https://github.com/benlipkin",
"followers_url": "https://api.github.com/users/benlipkin/followers",
"following_url": "https://api.github.com/users/benlipkin/following{/other_user}",
"gists_url": "https://api.github.com/users/benlipkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benlipkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benlipkin/subscriptions",
"organizations_url": "https://api.github.com/users/benlipkin/orgs",
"repos_url": "https://api.github.com/users/benlipkin/repos",
"events_url": "https://api.github.com/users/benlipkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/benlipkin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, can open a PR in `evaluate` as well to optimize this.\r\n\r\nRelatedly, I wanted to add a new metric, Kendall Tau (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kendalltau.html). If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the `datasets` or `evaluate` repo (or both)?\r\n\r\nThanks!",
"PR opened in`evaluate` library with same minor adjustment: https://github.com/huggingface/evaluate/pull/176 ",
"> If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the datasets or evaluate repo (or both)?\r\n\r\nI think you could just add it to `evaluate`, we're not adding new metrics in this repo anymore"
] | 1,656,946,921,000 | 1,657,197,669,000 | 1,657,197,669,000 | CONTRIBUTOR | null | During `_compute`, the `scipy.stats.spearmanr` function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where `return_pvalue=True`. I adjusted the `_compute` function to execute the `spearmanr` function once, store the results tuple in a temporary variable, and then pass the indexed contents to the expected keys of the returned dictionary. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4627/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4627/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4627",
"html_url": "https://github.com/huggingface/datasets/pull/4627",
"diff_url": "https://github.com/huggingface/datasets/pull/4627.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4627.patch",
"merged_at": "2022-07-07T12:41:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4626/comments | https://api.github.com/repos/huggingface/datasets/issues/4626/events | https://github.com/huggingface/datasets/issues/4626 | 1,293,256,269 | I_kwDODunzps5NFYZN | 4,626 | Add non-commercial licensing info for datasets for which we removed tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"yep plus `license_details` also makes sense for this IMO"
] | 1,656,945,163,000 | 1,657,290,449,000 | null | MEMBER | null | We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753
Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv)
We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4626/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4625/comments | https://api.github.com/repos/huggingface/datasets/issues/4625/events | https://github.com/huggingface/datasets/pull/4625 | 1,293,163,744 | PR_kwDODunzps46zELz | 4,625 | Unpack `dl_manager.iter_files` to allow parallelization | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool thanks ! Yup it sounds like the right solution.\r\n\r\nIt looks like `_generate_tables` needs to be updated as well to fix the CI"
] | 1,656,940,618,000 | 1,657,019,514,000 | 1,657,018,848,000 | CONTRIBUTOR | null | Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode.
(The issue reported [here](https://discuss.huggingface.co/t/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo/19887))
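In builder terms, the change amounts to something like this (a hypothetical builder sketch; `_info` and `_generate_examples` are omitted, and the `data_files` shape is assumed):
```python
import datasets

class SketchBuilder(datasets.GeneratorBasedBuilder):
    """Hypothetical builder illustrating the unpacking described above."""

    def _split_generators(self, dl_manager):
        files = self.config.data_files["train"]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # One iterable per file, so n_shards == len(files) in streaming,
                # instead of a single lazy FilesIterable (n_shards == 1).
                gen_kwargs={"files": [dl_manager.iter_files(file) for file in files]},
            )
        ]
```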
PS: Another option would be to override `FilesIterable.__getitem__` to make it indexable and check for that type in `_shard_kwargs` and `n_shards`, but IMO this solution adds too much unnecessary complexity. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4625/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4625/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4625",
"html_url": "https://github.com/huggingface/datasets/pull/4625",
"diff_url": "https://github.com/huggingface/datasets/pull/4625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4625.patch",
"merged_at": "2022-07-05T11:00:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4624/comments | https://api.github.com/repos/huggingface/datasets/issues/4624/events | https://github.com/huggingface/datasets/pull/4624 | 1,293,085,058 | PR_kwDODunzps46yzOK | 4,624 | Remove all paperswithcode_id: null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side",
"Yup it's maybe better to support it on the Hub side then indeed, thanks ! Closing this one"
] | 1,656,936,692,000 | 1,656,940,920,000 | 1,656,940,238,000 | MEMBER | null | On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png">
We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.
To have the validation working again, we can simply remove all the `paperswithcode_id: null` entries.
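For reference, a bulk cleanup could look like this (illustrative only; the actual edit may have been done differently, and the `datasets/*/README.md` layout is assumed):
```python
from pathlib import Path

# Drop the exact "paperswithcode_id: null" line from every dataset card.
for card in Path("datasets").glob("*/README.md"):
    lines = card.read_text(encoding="utf-8").splitlines(keepends=True)
    kept = [line for line in lines if line.strip() != "paperswithcode_id: null"]
    if len(kept) != len(lines):
        card.write_text("".join(kept), encoding="utf-8")
```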
cc @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4624/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4624/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4624",
"html_url": "https://github.com/huggingface/datasets/pull/4624",
"diff_url": "https://github.com/huggingface/datasets/pull/4624.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4624.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4623/comments | https://api.github.com/repos/huggingface/datasets/issues/4623/events | https://github.com/huggingface/datasets/issues/4623 | 1,293,042,894 | I_kwDODunzps5NEkTO | 4,623 | Loading MNIST as Pytorch Dataset | {
"login": "jameschapman19",
"id": 56592797,
"node_id": "MDQ6VXNlcjU2NTkyNzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/56592797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jameschapman19",
"html_url": "https://github.com/jameschapman19",
"followers_url": "https://api.github.com/users/jameschapman19/followers",
"following_url": "https://api.github.com/users/jameschapman19/following{/other_user}",
"gists_url": "https://api.github.com/users/jameschapman19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jameschapman19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jameschapman19/subscriptions",
"organizations_url": "https://api.github.com/users/jameschapman19/orgs",
"repos_url": "https://api.github.com/users/jameschapman19/repos",
"events_url": "https://api.github.com/users/jameschapman19/events{/privacy}",
"received_events_url": "https://api.github.com/users/jameschapman19/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | [
"Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ",
"So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ",
"This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```",
"Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!"
] | 1,656,934,390,000 | 1,656,945,650,000 | null | NONE | null | ## Describe the bug
Conversion of the MNIST dataset to PyTorch tensors fails with an AttributeError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```
## Expected results
Expect to see the image and the label as torch tensors.
## Actual results
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module>
dataset[0]
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__
return self._getitem(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem
formatted_output = format_table(
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested
mapped = [
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested
return function(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
python-BaseException
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Windows-10-10.0.22579-SP0
- Python version: 3.9.2
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4623/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4622/comments | https://api.github.com/repos/huggingface/datasets/issues/4622/events | https://github.com/huggingface/datasets/pull/4622 | 1,293,031,939 | PR_kwDODunzps46ynmT | 4,622 | Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present) | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n",
"@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)",
"and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)",
"Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. And if the user wants to have the `label` column alongside metadata columns, it can do so by passing `drop_labels = False` explicitely (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think."
] | 1,656,933,800,000 | 1,657,895,843,000 | 1,657,895,064,000 | CONTRIBUTOR | null | Will fix #4621
ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file a data directory). This happens because metadata files are collected inside `analyze()` function regardless of `drop_metadata` value. And then the following condition doesn't pass: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/imagefolder/imagefolder.py#L167
So I suggest to double check it inside `analyze()` not to collect metadata files if they are not needed. (and labels too, to be consistent)
---
Also, I added a test to check if labels are inferred correctly from directories names in general (because we didn't have it) :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4622/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4622/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4622",
"html_url": "https://github.com/huggingface/datasets/pull/4622",
"diff_url": "https://github.com/huggingface/datasets/pull/4622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4622.patch",
"merged_at": "2022-07-15T14:24:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4621/comments | https://api.github.com/repos/huggingface/datasets/issues/4621/events | https://github.com/huggingface/datasets/issues/4621 | 1,293,030,128 | I_kwDODunzps5NEhLw | 4,621 | ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,656,933,704,000 | 1,657,895,064,000 | 1,657,895,064,000 | CONTRIBUTOR | null | ## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or to pass features manually (when there is a tool that can infer them automatically) doesn't look like a good idea to me either.
## Steps to reproduce the bug
### Clone an example dataset from the Hub
```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```
### Try to load it
```python
from datasets import load_dataset
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```
or even just
```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```
as `drop_labels=False` is the default value.
## Expected results
A DatasetDict object with two features: `"image"` and `"label"`.
## Actual results
```
Traceback (most recent call last):
File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
ds = load_dataset(
File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
builder_instance.download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
return encode_nested_example(self, example)
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
{
File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
{
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```
## Environment info
`datasets` master branch
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4621/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/4621/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4620/comments | https://api.github.com/repos/huggingface/datasets/issues/4620/events | https://github.com/huggingface/datasets/issues/4620 | 1,292,797,878 | I_kwDODunzps5NDoe2 | 4,620 | Data type is not recognized when using datetime.time | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @mariosasko ",
"Hi, thanks for reporting! I'm investigating the issue."
] | 1,656,922,418,000 | 1,657,202,231,000 | 1,657,202,231,000 | CONTRIBUTOR | null | ## Describe the bug
Creating a dataset from a pandas dataframe with `datetime.time` format generates an error.
## Steps to reproduce the bug
```python
import pandas as pd
from datetime import time
from datasets import Dataset
df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
dataset = Dataset.from_pandas(df)
```
## Expected results
The dataset should be created.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 823, in from_pandas
return cls(table, info=info, split=split)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 679, in __init__
inferred_features = Features.from_arrow_schema(arrow_table.schema)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1315, in generate_from_arrow_type
return Value(dtype=_arrow_to_datasets_dtype(pa_type))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 83, in _arrow_to_datasets_dtype
return f"time64[{arrow_type.unit}]"
AttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit'
```
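Until this is fixed, a possible workaround (an untested sketch — note it changes the column type) is to serialize the `datetime.time` values to strings before ingestion:

```python
import pandas as pd
from datetime import time
from datasets import Dataset

df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
# workaround sketch: store times as ISO strings so pyarrow infers a plain string column
df["feature_name"] = df["feature_name"].astype(str)
dataset = Dataset.from_pandas(df)
```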
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4620/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4620/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4619/comments | https://api.github.com/repos/huggingface/datasets/issues/4619/events | https://github.com/huggingface/datasets/issues/4619 | 1,292,107,275 | I_kwDODunzps5NA_4L | 4,619 | np arrays get turned into native lists | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | [
"If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```",
"I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?",
"I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved."
] | 1,656,784,497,000 | 1,656,880,027,000 | null | NONE | null | ## Describe the bug
When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen?
## Steps to reproduce the bug
```python
>>> import datasets, numpy as np
>>> dataset = datasets.load_dataset("glue", "mrpc")["validation"]
Reusing dataset glue (...)
100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s]
>>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)
100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s]
>>> dataset2[0]["tmp"]
[0.5]
>>> type(dataset2[0]["tmp"])
<class 'list'>
```
## Expected results
`dataset2[0]["tmp"]` should be an `np.ndarray`.
## Actual results
It's a list.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: mac, though I'm pretty sure it happens on a linux machine too
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4619/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4619/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4618/comments | https://api.github.com/repos/huggingface/datasets/issues/4618/events | https://github.com/huggingface/datasets/issues/4618 | 1,292,078,225 | I_kwDODunzps5NA4yR | 4,618 | contribute data loading for object detection datasets with yolo data format | {
"login": "faizankshaikh",
"id": 8406903,
"node_id": "MDQ6VXNlcjg0MDY5MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizankshaikh",
"html_url": "https://github.com/faizankshaikh",
"followers_url": "https://api.github.com/users/faizankshaikh/followers",
"following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}",
"gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions",
"organizations_url": "https://api.github.com/users/faizankshaikh/orgs",
"repos_url": "https://api.github.com/users/faizankshaikh/repos",
"events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizankshaikh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?",
"@mariosasko sounds good to me!\r\n",
"Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script",
"1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader."
] | 1,656,775,319,000 | 1,658,412,644,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2))
**Describe the solution you'd like**
I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load datasets that use the YOLO data format.
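For context, parsing a YOLO-format label file boils down to reading one normalized box per line; a minimal sketch (hypothetical helper, not the linked script):

```python
# hypothetical sketch of YOLO label parsing, not the linked script
def parse_yolo_labels(label_path):
    """Each line is: <class_id> <x_center> <y_center> <width> <height>, normalized to [0, 1]."""
    objects = {"class_id": [], "bbox": []}
    with open(label_path, encoding="utf-8") as f:
        for line in f:
            class_id, x_c, y_c, w, h = line.split()
            objects["class_id"].append(int(class_id))
            objects["bbox"].append([float(x_c), float(y_c), float(w), float(h)])
    return objects
```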
**Describe alternatives you've considered**
The script can either be a standalone dataset builder, or a modified version of `ImageFolder`
**Additional context**
I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching 😄
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4618/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4618/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4615/comments | https://api.github.com/repos/huggingface/datasets/issues/4615/events | https://github.com/huggingface/datasets/pull/4615 | 1,291,307,428 | PR_kwDODunzps46tADt | 4,615 | Fix `embed_storage` on features inside lists/sequences | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,676,328,000 | 1,657,282,390,000 | 1,657,281,696,000 | CONTRIBUTOR | null | Add a dedicated function for `embed_storage` to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general).
Fix #4591
~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4615/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4615/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4615",
"html_url": "https://github.com/huggingface/datasets/pull/4615",
"diff_url": "https://github.com/huggingface/datasets/pull/4615.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4615.patch",
"merged_at": "2022-07-08T12:01:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4614/comments | https://api.github.com/repos/huggingface/datasets/issues/4614/events | https://github.com/huggingface/datasets/pull/4614 | 1,291,218,020 | PR_kwDODunzps46ssfw | 4,614 | Ensure ConcatenationTable.cast uses target_schema metadata | {
"login": "dtuit",
"id": 8114067,
"node_id": "MDQ6VXNlcjgxMTQwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8114067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dtuit",
"html_url": "https://github.com/dtuit",
"followers_url": "https://api.github.com/users/dtuit/followers",
"following_url": "https://api.github.com/users/dtuit/following{/other_user}",
"gists_url": "https://api.github.com/users/dtuit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dtuit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dtuit/subscriptions",
"organizations_url": "https://api.github.com/users/dtuit/orgs",
"repos_url": "https://api.github.com/users/dtuit/repos",
"events_url": "https://api.github.com/users/dtuit/events{/privacy}",
"received_events_url": "https://api.github.com/users/dtuit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,670,928,000 | 1,658,238,525,000 | 1,658,237,784,000 | CONTRIBUTOR | null | Currently, `ConcatenationTable.cast` does not use the `target_schema` metadata when casting subtables. This causes an issue when using `cast_column` and the underlying table is a `ConcatenationTable`.
Code example of where the issue arises:
```python
from datasets import Dataset, Image
column1 = [0, 1]
image_paths = ['/images/image1.jpg', '/images/image2.jpg']
ds = Dataset.from_dict({"column1": column1})
ds = ds.add_column("image", image_paths)
ds.cast_column("image", Image()) # Fails here
```
Output
```
...
TypeError: Couldn't cast array of type
string
to
{'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)}
```
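For illustration, a rough sketch of the idea behind the fix (not the actual patch): cast each block of the `ConcatenationTable` against the matching slice of the target schema, carrying over its metadata:

```python
import pyarrow as pa

# sketch only: select the target fields present in a block and keep the
# schema-level metadata (e.g. the Hugging Face features info)
def cast_block(block: pa.Table, target_schema: pa.Schema) -> pa.Table:
    fields = [target_schema.field(name) for name in block.column_names]
    subschema = pa.schema(fields, metadata=target_schema.metadata)
    return block.cast(subschema)
```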
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4614/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4614/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4614",
"html_url": "https://github.com/huggingface/datasets/pull/4614",
"diff_url": "https://github.com/huggingface/datasets/pull/4614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4614.patch",
"merged_at": "2022-07-19T13:36:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4613/comments | https://api.github.com/repos/huggingface/datasets/issues/4613/events | https://github.com/huggingface/datasets/pull/4613 | 1,291,181,193 | PR_kwDODunzps46skd6 | 4,613 | Align/fix license metadata info | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you thank you! Let's merge and pray? 😱 ",
"I just need to add `license_details` to the validator and yup we can merge"
] | 1,656,669,050,000 | 1,656,680,037,000 | 1,656,679,367,000 | MEMBER | null | fix bad "other-*" licenses and add the corresponding "license_details" when relevant | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4613/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4613/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4613",
"html_url": "https://github.com/huggingface/datasets/pull/4613",
"diff_url": "https://github.com/huggingface/datasets/pull/4613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4613.patch",
"merged_at": "2022-07-01T12:42:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4612/comments | https://api.github.com/repos/huggingface/datasets/issues/4612/events | https://github.com/huggingface/datasets/issues/4612 | 1,290,984,660 | I_kwDODunzps5M8tzU | 4,612 | Release 2.3.0 broke custom iterable datasets | {
"login": "aapot",
"id": 19529125,
"node_id": "MDQ6VXNlcjE5NTI5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aapot",
"html_url": "https://github.com/aapot",
"followers_url": "https://api.github.com/users/aapot/followers",
"following_url": "https://api.github.com/users/aapot/following{/other_user}",
"gists_url": "https://api.github.com/users/aapot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aapot/subscriptions",
"organizations_url": "https://api.github.com/users/aapot/orgs",
"repos_url": "https://api.github.com/users/aapot/repos",
"events_url": "https://api.github.com/users/aapot/events{/privacy}",
"received_events_url": "https://api.github.com/users/aapot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.",
"Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?",
"Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!"
] | 1,656,657,967,000 | 1,657,033,701,000 | 1,657,033,701,000 | NONE | null | ## Describe the bug
Trying to iterate over examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in the 2.3.0 release.
## Steps to reproduce the bug
```python
next(iter(custom_iterable_dataset))
```
## Expected results
`next(iter(custom_iterable_dataset))` should return examples from the dataset
## Actual results
```
/usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess()
16 See https://github.com/fsspec/gcsfs/issues/379
17 """
---> 18 fsspec.asyn.iothread[0] = None
19 fsspec.asyn.loop[0] = None
20
AttributeError: module 'fsspec' has no attribute 'asyn'
```
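A minimal sketch of the fix suggested in the comments above — import the submodule explicitly instead of relying on attribute access on the `fsspec` package:

```python
import fsspec.asyn  # explicit submodule import; `import fsspec` alone no longer exposes `fsspec.asyn`

def _set_fsspec_for_multiprocess() -> None:
    """
    Clear fsspec's shared event loop/thread so that each worker process
    creates its own. See https://github.com/fsspec/gcsfs/issues/379.
    """
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```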
## Environment info
- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4612/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4611/comments | https://api.github.com/repos/huggingface/datasets/issues/4611/events | https://github.com/huggingface/datasets/pull/4611 | 1,290,940,874 | PR_kwDODunzps46rxIX | 4,611 | Preserve member order by MockDownloadManager.iter_archive | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,654,500,000 | 1,656,694,751,000 | 1,656,694,108,000 | MEMBER | null | Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive.
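For illustration, one way to at least make the traversal deterministic is to sort the globbed paths (a sketch under that assumption — the actual patch may restore the archive order differently):

```python
from pathlib import Path

# sketch only: sorting makes the yielded order deterministic
# rather than filesystem-dependent
def iter_archive(path: Path):
    for file_path in sorted(path.rglob("*")):
        if file_path.is_file():
            yield file_path.relative_to(path).as_posix(), file_path.open("rb")
```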
See issue in:
- https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027
This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4611/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4611",
"html_url": "https://github.com/huggingface/datasets/pull/4611",
"diff_url": "https://github.com/huggingface/datasets/pull/4611.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4611.patch",
"merged_at": "2022-07-01T16:48:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4610/comments | https://api.github.com/repos/huggingface/datasets/issues/4610/events | https://github.com/huggingface/datasets/issues/4610 | 1,290,603,827 | I_kwDODunzps5M7Q0z | 4,610 | codeparrot/github-code failing to load | {
"login": "PyDataBlog",
"id": 29863388,
"node_id": "MDQ6VXNlcjI5ODYzMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/29863388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PyDataBlog",
"html_url": "https://github.com/PyDataBlog",
"followers_url": "https://api.github.com/users/PyDataBlog/followers",
"following_url": "https://api.github.com/users/PyDataBlog/following{/other_user}",
"gists_url": "https://api.github.com/users/PyDataBlog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PyDataBlog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PyDataBlog/subscriptions",
"organizations_url": "https://api.github.com/users/PyDataBlog/orgs",
"repos_url": "https://api.github.com/users/PyDataBlog/repos",
"events_url": "https://api.github.com/users/PyDataBlog/events{/privacy}",
"received_events_url": "https://api.github.com/users/PyDataBlog/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?",
"Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it",
"> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that releasee. Broke my application",
"This simple workaround should fix: https://huggingface.co/datasets/codeparrot/github-code/discussions/2\r\n\r\n`get_patterns_in_dataset_repository` can treat whether `base_path=None`, so we just need to make sure that codeparrot/github-code `_split_generators` calls with such an argument.",
"I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ? \r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?",
"Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https://huggingface.co/datasets/codeparrot/github-code/discussions/3",
"PR is merged, it's working now ! Closing this one :)",
"> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad."
] | 1,656,620,688,000 | 1,657,031,053,000 | 1,657,012,796,000 | NONE | null | ## Describe the bug
codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("codeparrot/github-code")
```
## Expected results
loaded dataset object
## Actual results
```python
[3]: dataset = load_dataset("codeparrot/github-code")
No config specified, defaulting to: github-code/all-all
Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 dataset = load_dataset("codeparrot/github-code")
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1678 # Download and prepare data
-> 1679 builder_instance.download_and_prepare(
1680 download_config=download_config,
1681 download_mode=download_mode,
1682 ignore_verifications=ignore_verifications,
1683 try_from_hf_gcs=try_from_hf_gcs,
1684 use_auth_token=use_auth_token,
1685 )
1687 # Build dataset for splits
1688 keep_in_memory = (
1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1690 )
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1220 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager)
162 def _split_generators(self, dl_manager):
164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(
165 _REPO_NAME,
166 timeout=100.0,
167 )
--> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info)
170 data_files = datasets.data_files.DataFilesDict.from_hf_repo(
171 patterns,
172 dataset_info=hfh_dataset_info,
173 )
175 files = dl_manager.download_and_extract(data_files["train"])
TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'
```
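A sketch of the compatibility guard discussed in the comments above for the dataset script (`hfh_dataset_info` as in the traceback; checking the signature avoids hard-coding a version cutoff, and `base_path=None` follows the comment above):

```python
import inspect
import datasets

get_patterns = datasets.data_files.get_patterns_in_dataset_repository
if "base_path" in inspect.signature(get_patterns).parameters:
    # newer `datasets`: base_path is a required argument
    patterns = get_patterns(hfh_dataset_info, base_path=None)
else:
    patterns = get_patterns(hfh_dataset_info)
```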
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4610/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4609/comments | https://api.github.com/repos/huggingface/datasets/issues/4609/events | https://github.com/huggingface/datasets/issues/4609 | 1,290,392,083 | I_kwDODunzps5M6dIT | 4,609 | librispeech dataset has to download the whole subset when specifying the split to use | {
"login": "sunhaozhepy",
"id": 73462159,
"node_id": "MDQ6VXNlcjczNDYyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunhaozhepy",
"html_url": "https://github.com/sunhaozhepy",
"followers_url": "https://api.github.com/users/sunhaozhepy/followers",
"following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}",
"gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions",
"organizations_url": "https://api.github.com/users/sunhaozhepy/orgs",
"repos_url": "https://api.github.com/users/sunhaozhepy/repos",
"events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunhaozhepy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.",
"Hi,\r\n\r\nThat's a great help. Thank you very much."
] | 1,656,607,104,000 | 1,657,662,272,000 | 1,657,662,272,000 | NONE | null | ## Describe the bug
The librispeech dataset has to download the whole subset when specifying the split to use.
## Steps to reproduce the bug
see below
# Sample code to reproduce the bug
```python
!pip install datasets
from datasets import load_dataset
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100")
```
## Expected results
The split "train.clean.100" is downloaded.
## Actual results
All four splits in the "clean" subset are downloaded.
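As suggested in the comments above, streaming sidesteps the full download; a minimal sketch:

```python
from datasets import load_dataset

# streaming fetches examples lazily instead of downloading every "clean" split
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100", streaming=True)
first_example = next(iter(raw_dataset))
```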
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4609/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4608/comments | https://api.github.com/repos/huggingface/datasets/issues/4608/events | https://github.com/huggingface/datasets/pull/4608 | 1,290,298,002 | PR_kwDODunzps46pm9A | 4,608 | Fix xisfile, xgetsize, xisdir, xlistdir in private repo | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested"
] | 1,656,602,601,000 | 1,657,111,559,000 | 1,657,110,859,000 | MEMBER | null | `xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However, it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`.
This is because the authentication headers are not passed correctly in this case.
This is causing dataset streaming to fail in private parquet repositories, as noted in https://github.com/huggingface/datasets/issues/4605
I fixed `xisfile` and the other functions that behave the same way: `xgetsize`, `xisdir` and `xlistdir`.
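For illustration, a rough sketch of the header-passing idea (hypothetical names, not the actual patch):

```python
import os
import requests

# sketch only: the auth headers must also be sent for plain URLs,
# not only for chained (zip://...::https://...) paths;
# `get_authentication_headers_for_url` is a hypothetical helper here
def xisfile(path: str, use_auth_token=None) -> bool:
    if "://" not in path:
        return os.path.isfile(path)  # local path
    headers = get_authentication_headers_for_url(path, use_auth_token=use_auth_token)
    response = requests.head(path, headers=headers, allow_redirects=True)
    return response.ok
```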
TODO:
- [x] tests
fix https://github.com/huggingface/datasets/issues/4605 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4608/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4608",
"html_url": "https://github.com/huggingface/datasets/pull/4608",
"diff_url": "https://github.com/huggingface/datasets/pull/4608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4608.patch",
"merged_at": "2022-07-06T12:34:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4607/comments | https://api.github.com/repos/huggingface/datasets/issues/4607/events | https://github.com/huggingface/datasets/pull/4607 | 1,290,171,941 | PR_kwDODunzps46pLnd | 4,607 | Align more metadata with other repo types (models,spaces) | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing some content - but this is unrelated to this PR so we can ignore these failures",
"thanks so much @lhoestq !!",
"There's also a follow-up PR to this one, in #4613 – I would suggest to merge all of them at the same time and hope not too many things are broken 🙀 🙀 ",
"Alright merging this one now, let's see how broken things get"
] | 1,656,597,132,000 | 1,656,676,837,000 | 1,656,676,154,000 | MEMBER | null | See also the associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to be merged after this one is merged) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4607/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4607",
"html_url": "https://github.com/huggingface/datasets/pull/4607",
"diff_url": "https://github.com/huggingface/datasets/pull/4607.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4607.patch",
"merged_at": "2022-07-01T11:49:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4606/comments | https://api.github.com/repos/huggingface/datasets/issues/4606/events | https://github.com/huggingface/datasets/issues/4606 | 1,290,083,534 | I_kwDODunzps5M5RzO | 4,606 | evaluation result changes after `datasets` version change | {
"login": "thnkinbtfly",
"id": 70014488,
"node_id": "MDQ6VXNlcjcwMDE0NDg4",
"avatar_url": "https://avatars.githubusercontent.com/u/70014488?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thnkinbtfly",
"html_url": "https://github.com/thnkinbtfly",
"followers_url": "https://api.github.com/users/thnkinbtfly/followers",
"following_url": "https://api.github.com/users/thnkinbtfly/following{/other_user}",
"gists_url": "https://api.github.com/users/thnkinbtfly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thnkinbtfly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thnkinbtfly/subscriptions",
"organizations_url": "https://api.github.com/users/thnkinbtfly/orgs",
"repos_url": "https://api.github.com/users/thnkinbtfly/repos",
"events_url": "https://api.github.com/users/thnkinbtfly/events{/privacy}",
"received_events_url": "https://api.github.com/users/thnkinbtfly/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | [
"Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `revision=\"2.2.0\"`) to `load_dataset.`\r\n"
] | 1,656,593,006,000 | 1,656,956,852,000 | null | NONE | null | ## Describe the bug
evaluation result changes after `datasets` version change
## Steps to reproduce the bug
1. Train a model on WikiAnn
2. Reload the checkpoint -> the test accuracy becomes the same as the eval accuracy
3. This behavior is gone after downgrading `datasets`
https://colab.research.google.com/drive/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing
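For a stable comparison across library upgrades, the maintainer comment above suggests pinning the dataset script to a fixed revision. A minimal sketch of that call (the `"en"` config name is an assumption for illustration):

```python
from datasets import load_dataset

# Pin the wikiann loading script to the version shipped with datasets 2.2.0,
# so evaluation stays comparable across library upgrades.
ds = load_dataset("wikiann", "en", revision="2.2.0")
```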
## Expected results
The evaluation result shouldn't change when the `datasets` version changes
## Actual results
The evaluation result changes when the `datasets` version changes
## Environment info
- `datasets` version: 2.3.2
- Platform: colab
- Python version: 3.7.13
- PyArrow version: 6.0.1
Q. How could the evaluation result change when the `datasets` version changes? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4606/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4605/comments | https://api.github.com/repos/huggingface/datasets/issues/4605/events | https://github.com/huggingface/datasets/issues/4605 | 1,290,058,970 | I_kwDODunzps5M5Lza | 4,605 | Dataset Viewer issue for boris/gis_filtered | {
"login": "WaterKnight1998",
"id": 41203448,
"node_id": "MDQ6VXNlcjQxMjAzNDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WaterKnight1998",
"html_url": "https://github.com/WaterKnight1998",
"followers_url": "https://api.github.com/users/WaterKnight1998/followers",
"following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}",
"gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions",
"organizations_url": "https://api.github.com/users/WaterKnight1998/orgs",
"repos_url": "https://api.github.com/users/WaterKnight1998/repos",
"events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/WaterKnight1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).",
"I already did that, it returns error when using streaming",
"Oh, sorry, I misread. Looking at it. Maybe @huggingface/datasets or @SBrandeis ",
"I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`",
"This is indeed a bug in `datasets`. Parquet datasets in gated/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https://github.com/huggingface/datasets/pull/4608"
] | 1,656,591,814,000 | 1,657,110,859,000 | 1,657,110,859,000 | NONE | null | ### Link
https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train
### Description
When I try to access this from the website I get this error:
Status code: 400
Exception: ClientResponseError
Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/boris/gis_filtered/resolve/80b805053ce61d4eb487b6b8d9095d775c2c466e/data/train/0000.parquet')
If I try to load with code I also get the same issue:
```python
dataset2_train = load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="train", streaming=True)
dataset2_validation = load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="validation", streaming=True)
```
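Since streaming datasets are lazy, the 401 only surfaces on iteration — a minimal sketch of how the failure reproduces (it assumes a prior `huggingface-cli login`; the fix later landed in PR #4608):

```python
from datasets import load_dataset

ds = load_dataset("boris/gis_filtered", split="train", streaming=True, use_auth_token=True)
# Before the fix, fetching the first example raised
# ClientResponseError: 401, message='Unauthorized'
print(next(iter(ds)))
```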
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4605/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4604/comments | https://api.github.com/repos/huggingface/datasets/issues/4604/events | https://github.com/huggingface/datasets/pull/4604 | 1,289,963,962 | PR_kwDODunzps46oeju | 4,604 | Update CI Windows orb | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,586,831,000 | 1,656,595,991,000 | 1,656,595,346,000 | MEMBER | null | This PR tries to fix recurrent random CI failures on Windows.
After 2 runs, it seems to have fixed the issue.
Fix #4603. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4604/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4604",
"html_url": "https://github.com/huggingface/datasets/pull/4604",
"diff_url": "https://github.com/huggingface/datasets/pull/4604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4604.patch",
"merged_at": "2022-06-30T13:22:25"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4603/comments | https://api.github.com/repos/huggingface/datasets/issues/4603/events | https://github.com/huggingface/datasets/issues/4603 | 1,289,963,331 | I_kwDODunzps5M40dD | 4,603 | CI fails recurrently and randomly on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [] | 1,656,586,798,000 | 1,656,595,345,000 | 1,656,595,345,000 | MEMBER | null | As reported by @lhoestq,
The Windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4603/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4602/comments | https://api.github.com/repos/huggingface/datasets/issues/4602/events | https://github.com/huggingface/datasets/pull/4602 | 1,289,950,379 | PR_kwDODunzps46obqi | 4,602 | Upgrade setuptools in windows CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,586,121,000 | 1,656,593,858,000 | 1,656,593,177,000 | MEMBER | null | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
Hopefully this fixes the issue.
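For reference, a hedged sketch of the equivalent local upgrade — package names come from this PR and #4601, with `wheel` added as an assumption; the actual CI change edits the CircleCI config rather than running Python:

```python
import subprocess
import sys

# Upgrade the build tooling that produced the invalid "UNKNOWN" wheel above.
for pkg in ("pip", "setuptools", "wheel"):
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", pkg], check=True)
```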
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4602/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4602",
"html_url": "https://github.com/huggingface/datasets/pull/4602",
"diff_url": "https://github.com/huggingface/datasets/pull/4602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4602.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4601/comments | https://api.github.com/repos/huggingface/datasets/issues/4601/events | https://github.com/huggingface/datasets/pull/4601 | 1,289,924,715 | PR_kwDODunzps46oWF8 | 4,601 | Upgrade pip in WIN CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It failed terribly"
] | 1,656,584,742,000 | 1,656,586,465,000 | 1,656,585,818,000 | MEMBER | null | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular, it seems that building the wheels fails. Here is an example of logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
I tried updating pip and re-running the CI several times and couldn't reproduce this issue, so I think upgrading pip may solve it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4601/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4601",
"html_url": "https://github.com/huggingface/datasets/pull/4601",
"diff_url": "https://github.com/huggingface/datasets/pull/4601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4601.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4600/comments | https://api.github.com/repos/huggingface/datasets/issues/4600/events | https://github.com/huggingface/datasets/pull/4600 | 1,289,177,042 | PR_kwDODunzps46l3P1 | 4,600 | Remove multiple config section | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,529,761,000 | 1,656,956,480,000 | 1,656,955,781,000 | MEMBER | null | This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4600/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4600",
"html_url": "https://github.com/huggingface/datasets/pull/4600",
"diff_url": "https://github.com/huggingface/datasets/pull/4600.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4600.patch",
"merged_at": "2022-07-04T17:29:41"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4599/comments | https://api.github.com/repos/huggingface/datasets/issues/4599/events | https://github.com/huggingface/datasets/pull/4599 | 1,288,849,933 | PR_kwDODunzps46kvfC | 4,599 | Smooth-BLEU bug fixed | {
"login": "Aktsvigun",
"id": 36672861,
"node_id": "MDQ6VXNlcjM2NjcyODYx",
"avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aktsvigun",
"html_url": "https://github.com/Aktsvigun",
"followers_url": "https://api.github.com/users/Aktsvigun/followers",
"following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}",
"gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions",
"organizations_url": "https://api.github.com/users/Aktsvigun/orgs",
"repos_url": "https://api.github.com/users/Aktsvigun/repos",
"events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aktsvigun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4190228726,
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate",
"name": "transfer-to-evaluate",
"color": "E3165C",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Thanks @Aktsvigun for your fix.\r\n\r\nHowever, metrics in `datasets` are in deprecation mode:\r\n- #4739\r\n\r\nYou should transfer this PR to the `evaluate` library: https://github.com/huggingface/evaluate\r\n\r\nJust for context, here the link to the PR by @Aktsvigun on tensorflow/nmt:\r\n- https://github.com/tensorflow/nmt/pull/488"
] | 1,656,514,302,000 | 1,663,918,960,000 | 1,663,918,960,000 | NONE | null | Hi,
the current implementation of smooth-BLEU contains a bug: it smooths unigrams as well. Consequently, even when the reference and the translation consist of entirely different tokens, it still returns a non-zero value (please see the attached image).
This, however, contradicts the source paper that proposed smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_:
> Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores.
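To make the quoted rule concrete, here is a minimal sketch of the per-order precisions with smoothing applied only for n > 1 (an illustration, not the `tensorflow/nmt` script itself):

```python
# matches[n] / totals[n] are the clipped n-gram hits and candidate n-gram counts.
def smoothed_precisions(matches, totals, max_order=4):
    precisions = []
    for n in range(1, max_order + 1):
        m, t = matches[n], totals[n]
        if n > 1:  # add-one smoothing for higher-order n-grams only
            m, t = m + 1, t + 1
        precisions.append(m / t if t else 0.0)
    return precisions

# With zero matches at every order the unigram precision stays 0.0, so the
# geometric mean (and hence BLEU) is zero, as the paper requires:
# smoothed_precisions({1: 0, 2: 0, 3: 0, 4: 0}, {1: 5, 2: 4, 3: 3, 4: 2})
# -> [0.0, 0.2, 0.25, 0.3333...]
```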
This pull request aims at fixing this bug.
I made a pull request in the target repository `tensorflow/nmt`, which implements this script, yet the last commit there dates from 19.02.2019 and I doubt this will be fixed promptly. Yet this bug is critical, for instance for summarization datasets with short summaries (e.g. AESLC), where smoothing needs to be applied. Therefore, the easiest solution I found is to fork the repo and download this script directly from the forked, fixed repo.
Kind regards,
Akim Tsvigun
<img width="516" alt="Снимок экрана 2022-06-29 в 17 49 27" src="https://user-images.githubusercontent.com/36672861/176466935-ac579e6d-6a93-4111-ab41-9b33056e7d47.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4599/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4599",
"html_url": "https://github.com/huggingface/datasets/pull/4599",
"diff_url": "https://github.com/huggingface/datasets/pull/4599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4599.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4598/comments | https://api.github.com/repos/huggingface/datasets/issues/4598/events | https://github.com/huggingface/datasets/pull/4598 | 1,288,774,514 | PR_kwDODunzps46kfOS | 4,598 | Host financial_phrasebank data on the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,511,171,000 | 1,656,668,474,000 | 1,656,667,776,000 | MEMBER | null |
Fix #4597. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4598/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4598",
"html_url": "https://github.com/huggingface/datasets/pull/4598",
"diff_url": "https://github.com/huggingface/datasets/pull/4598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4598.patch",
"merged_at": "2022-07-01T09:29:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4597/comments | https://api.github.com/repos/huggingface/datasets/issues/4597/events | https://github.com/huggingface/datasets/issues/4597 | 1,288,672,007 | I_kwDODunzps5Mz5MH | 4,597 | Streaming issue for financial_phrasebank | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)",
"Let's see if their license allows hosting their data on the Hub.",
"License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub."
] | 1,656,506,743,000 | 1,656,667,776,000 | 1,656,667,776,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset:
```
Server error
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
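A minimal sketch of how the failure reproduced outside the viewer (the config name comes from the link above; the data was hosted on researchgate.net at the time, per the discussion above):

```python
from datasets import load_dataset

ds = load_dataset("financial_phrasebank", "sentences_allagree", split="train", streaming=True)
# This iteration failed with "Give up after 5 attempts with ConnectionError"
# until the data was re-hosted on the Hub.
next(iter(ds))
```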
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4597/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4596/comments | https://api.github.com/repos/huggingface/datasets/issues/4596/events | https://github.com/huggingface/datasets/issues/4596 | 1,288,381,735 | I_kwDODunzps5MyyUn | 4,596 | Dataset Viewer issue for universal_dependencies | {
"login": "Jordy-VL",
"id": 16034009,
"node_id": "MDQ6VXNlcjE2MDM0MDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jordy-VL",
"html_url": "https://github.com/Jordy-VL",
"followers_url": "https://api.github.com/users/Jordy-VL/followers",
"following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}",
"gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions",
"organizations_url": "https://api.github.com/users/Jordy-VL/orgs",
"repos_url": "https://api.github.com/users/Jordy-VL/repos",
"events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jordy-VL/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks, looking at it!",
"Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n"
] | 1,656,492,629,000 | 1,662,550,168,000 | 1,662,550,167,000 | NONE | null | ### Link
https://huggingface.co/datasets/universal_dependencies
### Description
invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0
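A quick way to inspect the underlying response, using the `/splits` endpoint quoted above (a sketch; the exact error body is an assumption):

```python
import requests

r = requests.get("https://datasets-server.huggingface.co/splits",
                 params={"dataset": "universal_dependencies"})
print(r.status_code)
# "Unexpected token I in JSON at position 0" suggests a non-JSON body
# beginning with "I", e.g. "Internal Server Error".
print(r.text[:200])
```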
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4596/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4595/comments | https://api.github.com/repos/huggingface/datasets/issues/4595/events | https://github.com/huggingface/datasets/issues/4595 | 1,288,275,976 | I_kwDODunzps5MyYgI | 4,595 | Dataset Viewer issue with False positive PII redaction | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n",
"This was indeed a scraping issue which I assumed was a display issue; sorry about that!"
] | 1,656,486,957,000 | 1,656,491,381,000 | 1,656,491,269,000 | CONTRIBUTOR | null | ### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
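For illustration, a naive email pattern of the kind that can produce this false positive (hypothetical — whether the redaction happened in the viewer or upstream during scraping is resolved in the discussion above):

```python
import re

# A simplistic "email" regex happily matches Wolfram-style Symbol@Function syntax.
EMAIL = re.compile(r"[\w.]+@[\w.]+")
print(EMAIL.search("RootMeanSquare@Range[10]"))  # matches "RootMeanSquare@Range"
```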
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4595/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4594/comments | https://api.github.com/repos/huggingface/datasets/issues/4594/events | https://github.com/huggingface/datasets/issues/4594 | 1,288,070,023 | I_kwDODunzps5MxmOH | 4,594 | load_from_disk suggests incorrect fix when used to load DatasetDict | {
"login": "dvsth",
"id": 11157811,
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvsth",
"html_url": "https://github.com/dvsth",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"repos_url": "https://api.github.com/users/dvsth/repos",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [] | 1,656,466,801,000 | 1,656,475,424,000 | 1,656,475,424,000 | NONE | null | Edit: Please feel free to remove this issue. The problem was not the error message but the fact that the DatasetDict.load_from_disk does not support loading nested splits, i.e. if one of the splits is itself a DatasetDict. If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indicating that? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4594/timeline | null | not_planned | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4593 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4593/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4593/comments | https://api.github.com/repos/huggingface/datasets/issues/4593/events | https://github.com/huggingface/datasets/pull/4593 | 1,288,067,699 | PR_kwDODunzps46iIkn | 4,593 | Fix error message when using load_from_disk to load DatasetDict | {
"login": "dvsth",
"id": 11157811,
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dvsth",
"html_url": "https://github.com/dvsth",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"repos_url": "https://api.github.com/users/dvsth/repos",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,656,466,467,000 | 1,656,475,319,000 | 1,656,475,299,000 | NONE | null | Issue #4594
Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.
Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`.
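A minimal sketch of the suggested entry point (the save path is a hypothetical placeholder):

```python
from datasets import DatasetDict

# DatasetDict.load_from_disk is defined in the datasets.dataset_dict module,
# which is why the corrected message points there.
ds_dict = DatasetDict.load_from_disk("path/to/saved_dataset_dict")
```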
Changes: Change the suggestion to say "Please use `datasets.dataset_dict.load_from_disk` instead." | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4593/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4593",
"html_url": "https://github.com/huggingface/datasets/pull/4593",
"diff_url": "https://github.com/huggingface/datasets/pull/4593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4593.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4592/comments | https://api.github.com/repos/huggingface/datasets/issues/4592/events | https://github.com/huggingface/datasets/issues/4592 | 1,288,029,377 | I_kwDODunzps5MxcTB | 4,592 | Issue with jalFaizy/detect_chess_pieces when running datasets-cli test | {
"login": "faizankshaikh",
"id": 8406903,
"node_id": "MDQ6VXNlcjg0MDY5MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faizankshaikh",
"html_url": "https://github.com/faizankshaikh",
"followers_url": "https://api.github.com/users/faizankshaikh/followers",
"following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}",
"gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions",
"organizations_url": "https://api.github.com/users/faizankshaikh/orgs",
"repos_url": "https://api.github.com/users/faizankshaikh/repos",
"events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}",
"received_events_url": "https://api.github.com/users/faizankshaikh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy/detect_chess_pieces\" dataset is here: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https://huggingface.co/docs/datasets/master/en/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/1",
"Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n![image](https://user-images.githubusercontent.com/8406903/176397633-7b077d81-2044-4487-b58e-6346b05be5cf.png)\r\n\r\n\r\n",
"Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this."
] | 1,656,461,754,000 | 1,656,498,603,000 | 1,656,488,967,000 | NONE | null | ### Link
https://huggingface.co/datasets/jalFaizy/detect_chess_pieces
### Description
I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py).
When I run the command
`$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs`
It gives the following error:
```
Using custom data configuration default
Traceback (most recent call last):
File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module>
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main
service.run()
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run
for j, builder in enumerate(get_builders()):
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders
yield builder_cls(
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__
super().__init__(*args, **kwargs)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__
info = self.get_exported_dataset_info()
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info
return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos
return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory
dataset_infos_dict = {
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp>
config_name: DatasetInfo.from_dict(dataset_info_dict)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict
return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})
File "<string>", line 20, in __init__
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__
templates = [
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp>
template if isinstance(template, TaskTemplate) else task_template_from_dict(template)
File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict
return template.from_dict(task_template_dict)
AttributeError: 'NoneType' object has no attribute 'from_dict'
```
My assumption is that there is some issue in how the "task_templates" are read, because even if I set them to None, or do not include the argument at all, the same error occurs.
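Based only on the traceback, a hedged reconstruction of the failing call: `task_template_from_dict` looks the template class up by its "task" name, the internal lookup yields `None` for an unknown name, and the subsequent `.from_dict` call raises. The task name below is a hypothetical example:

```python
from datasets.tasks import task_template_from_dict

# Any "task" value the installed datasets version does not recognize makes the
# internal lookup return None, reproducing:
# AttributeError: 'NoneType' object has no attribute 'from_dict'
task_template_from_dict({"task": "object-detection"})
```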
### Owner
Yes | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4592/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4591/comments | https://api.github.com/repos/huggingface/datasets/issues/4591/events | https://github.com/huggingface/datasets/issues/4591 | 1,288,021,332 | I_kwDODunzps5MxaVU | 4,591 | Can't push Images to hub with manual Dataset | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure."
] | 1,656,460,883,000 | 1,657,281,696,000 | 1,657,281,695,000 | CONTRIBUTOR | null | ## Describe the bug
If I create a dataset that includes an 'Image' feature manually, the decoded images are not pushed when pushing to the Hub;
instead, it looks for each image at the local path where it is (or used to be).
This doesn't happen with imagefolder (at least it didn't use to). I want to build the dataset manually because it is complicated.
This happens even though the dataset looks like it contains decoded images:
![image](https://user-images.githubusercontent.com/15624271/176322689-2cc819cf-9d5c-4a8f-9f3d-83ae8ec06f20.png)
and I use `embed_external_files=True` in `push_to_hub` (same result with `False`)
## Steps to reproduce the bug
```python
from PIL import Image
from datasets import Image as ImageFeature
from datasets import Features,Dataset
#manually create dataset
feats=Features(
{
"images": [ImageFeature()], #same even if explicitly ImageFeature(decode=True)
"input_image": ImageFeature(),
}
)
test_data={"images":[[Image.open("test.jpg"),Image.open("test.jpg"),Image.open("test.jpg")]], "input_image":[Image.open("test.jpg")]}
test_dataset=Dataset.from_dict(test_data,features=feats)
print(test_dataset)
test_dataset.push_to_hub("ceyda/image_test_public",private=False,token="",embed_external_files=True)
# clear cache rm -r ~/.cache/huggingface
# remove "test.jpg" # remove to see that it is looking for image on the local path
test_dataset=load_dataset("ceyda/image_test_public",use_auth_token="")
print(test_dataset)
print(test_dataset['train'][0])
```
## Expected results
It should be possible to push the image bytes if the dataset has `Image(decode=True)`.
## Actual results
It errors because it tries to decode the file from the non-existing local path.
```
----> print(test_dataset['train'][0])
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
...
-> 3068 fp = builtins.open(filename, "rb")
3069 exclusive_fp = True
3071 try:
FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg'
```
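A possible workaround in the meantime (just a sketch I'd expect to work, not the proper fix) is to embed the encoded bytes manually, so that no local path is ever stored:
```python
import io

from PIL import Image
from datasets import Dataset, Features
from datasets import Image as ImageFeature

def to_encoded(img):
    # Storing {"bytes": ..., "path": None} makes the Image feature keep the
    # actual bytes instead of a reference to a local file
    buf = io.BytesIO()
    img.save(buf, format="JPEG")
    return {"bytes": buf.getvalue(), "path": None}

feats = Features({"input_image": ImageFeature()})
test_data = {"input_image": [to_encoded(Image.open("test.jpg"))]}
Dataset.from_dict(test_data, features=feats).push_to_hub("ceyda/image_test_public")
```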
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4591/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4590/comments | https://api.github.com/repos/huggingface/datasets/issues/4590/events | https://github.com/huggingface/datasets/pull/4590 | 1,287,941,058 | PR_kwDODunzps46htv0 | 4,590 | Generalize meta_path json file creation in load.py [#4540] | {
"login": "VijayKalmath",
"id": 20517962,
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VijayKalmath",
"html_url": "https://github.com/VijayKalmath",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova, Can you please review this PR for Issue #4540 ",
"@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.",
"Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)"
] | 1,656,452,886,000 | 1,657,292,113,000 | 1,657,199,865,000 | CONTRIBUTOR | null | # What does this PR do?
## Summary
*In the function `_copy_script_and_other_resources_in_importable_dir`, using a string split to generate `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed `meta_path` to use `os.path.splitext` instead of `str.split` to generalize the code (see the illustration below).
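For illustration (paths made up), the difference on a module path that contains extra dots:
```python
import os.path

script_path = "/cache/my.dataset.name/loader.py"

# str.split(".")[0] truncates at the FIRST dot and produces a wrong path:
broken = script_path.split(".")[0] + ".json"            # '/cache/my.json'
# os.path.splitext only strips the final extension:
meta_path = os.path.splitext(script_path)[0] + ".json"  # '/cache/my.dataset.name/loader.json'
```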
## Deletions
-
## Issues Addressed :
Fixes #4540 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4590/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4590",
"html_url": "https://github.com/huggingface/datasets/pull/4590",
"diff_url": "https://github.com/huggingface/datasets/pull/4590.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4590.patch",
"merged_at": "2022-07-07T13:17:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4589/comments | https://api.github.com/repos/huggingface/datasets/issues/4589/events | https://github.com/huggingface/datasets/issues/4589 | 1,287,600,029 | I_kwDODunzps5Mvzed | 4,589 | Permission denied: '/home/.cache' when load_dataset with local script | {
"login": "jiangh0",
"id": 24559732,
"node_id": "MDQ6VXNlcjI0NTU5NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/24559732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangh0",
"html_url": "https://github.com/jiangh0",
"followers_url": "https://api.github.com/users/jiangh0/followers",
"following_url": "https://api.github.com/users/jiangh0/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangh0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangh0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangh0/subscriptions",
"organizations_url": "https://api.github.com/users/jiangh0/orgs",
"repos_url": "https://api.github.com/users/jiangh0/repos",
"events_url": "https://api.github.com/users/jiangh0/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangh0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [] | 1,656,433,563,000 | 1,656,483,988,000 | 1,656,483,908,000 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4589/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4588/comments | https://api.github.com/repos/huggingface/datasets/issues/4588/events | https://github.com/huggingface/datasets/pull/4588 | 1,287,368,751 | PR_kwDODunzps46f2kF | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ",
"@younesbelkada we have just merged this PR."
] | 1,656,423,568,000 | 1,657,036,875,000 | 1,657,036,192,000 | MEMBER | null | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
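A quick sanity check once the data is re-hosted (the `en` config name is assumed):
```python
from datasets import load_dataset

# This previously raised NonMatchingChecksumError: Google Drive rate-limits
# scripted downloads and can return an HTML page instead of the data files
dataset = load_dataset("head_qa", "en")
print(dataset)
```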
Fix https://huggingface.co/datasets/head_qa/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4588/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4588/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4588",
"html_url": "https://github.com/huggingface/datasets/pull/4588",
"diff_url": "https://github.com/huggingface/datasets/pull/4588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4588.patch",
"merged_at": "2022-07-05T15:49:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4587/comments | https://api.github.com/repos/huggingface/datasets/issues/4587/events | https://github.com/huggingface/datasets/pull/4587 | 1,287,291,494 | PR_kwDODunzps46flzR | 4,587 | Validate new_fingerprint passed by user | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,420,381,000 | 1,656,425,517,000 | 1,656,424,844,000 | MEMBER | null | Users can pass the dataset fingerprint they want in `map` and other dataset transforms.
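Below is a minimal sketch of the kind of validation this adds (the allowed character set and the length limit are assumptions, not the exact values from this PR):
```python
import re

MAX_FINGERPRINT_LENGTH = 64  # assumed limit; cache file names must stay short

def validate_fingerprint(fingerprint):
    # The fingerprint ends up in cache file names, so it must be a
    # non-empty, filesystem-safe and reasonably short string
    if not isinstance(fingerprint, str) or not fingerprint:
        raise ValueError(f"Invalid fingerprint: {fingerprint}")
    if re.search(r"[^a-zA-Z0-9_]", fingerprint):
        raise ValueError(f"Fingerprint may only contain characters in [a-zA-Z0-9_]: {fingerprint}")
    if len(fingerprint) > MAX_FINGERPRINT_LENGTH:
        raise ValueError(f"Fingerprint is too long (> {MAX_FINGERPRINT_LENGTH} characters)")
```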
The validation is needed because the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters, as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4587/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4587",
"html_url": "https://github.com/huggingface/datasets/pull/4587",
"diff_url": "https://github.com/huggingface/datasets/pull/4587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4587.patch",
"merged_at": "2022-06-28T14:00:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4586/comments | https://api.github.com/repos/huggingface/datasets/issues/4586/events | https://github.com/huggingface/datasets/pull/4586 | 1,287,105,636 | PR_kwDODunzps46e9xB | 4,586 | Host pn_summary data on the Hub instead of Google Drive | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,410,705,000 | 1,656,427,976,000 | 1,656,427,323,000 | MEMBER | null | Fix #4581. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4586/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4586",
"html_url": "https://github.com/huggingface/datasets/pull/4586",
"diff_url": "https://github.com/huggingface/datasets/pull/4586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4586.patch",
"merged_at": "2022-06-28T14:42:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4585/comments | https://api.github.com/repos/huggingface/datasets/issues/4585/events | https://github.com/huggingface/datasets/pull/4585 | 1,287,064,929 | PR_kwDODunzps46e1Ne | 4,585 | Host multi_news data on the Hub instead of Google Drive | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,408,726,000 | 1,656,425,975,000 | 1,656,425,328,000 | MEMBER | null | Host the data files of the multi_news dataset on the Hub.
They were previously hosted on Google Drive.
Fix #4580. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4585/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4585",
"html_url": "https://github.com/huggingface/datasets/pull/4585",
"diff_url": "https://github.com/huggingface/datasets/pull/4585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4585.patch",
"merged_at": "2022-06-28T14:08:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4584/comments | https://api.github.com/repos/huggingface/datasets/issues/4584/events | https://github.com/huggingface/datasets/pull/4584 | 1,286,911,993 | PR_kwDODunzps46eVF7 | 4,584 | Add binary classification task IDs | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.",
"> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217",
"I don't think we need to update this file anymore. We should remove it IMO, and simply update the dataset [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging)",
"I'm closing this PR."
] | 1,656,401,439,000 | 1,674,725,273,000 | 1,674,725,272,000 | MEMBER | null | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
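On a dataset card this would surface as a finer-grained task tag, e.g. (the exact tag name may differ from this sketch):
```
task_categories:
- text-classification
task_ids:
- binary-classification
```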
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishekkrthakur @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4584/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4584",
"html_url": "https://github.com/huggingface/datasets/pull/4584",
"diff_url": "https://github.com/huggingface/datasets/pull/4584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4584.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4583/comments | https://api.github.com/repos/huggingface/datasets/issues/4583/events | https://github.com/huggingface/datasets/pull/4583 | 1,286,790,871 | PR_kwDODunzps46d7xo | 4,583 | Implementation of FLAC support using torchaudio | {
"login": "rafael-ariascalles",
"id": 45745870,
"node_id": "MDQ6VXNlcjQ1NzQ1ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/45745870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafael-ariascalles",
"html_url": "https://github.com/rafael-ariascalles",
"followers_url": "https://api.github.com/users/rafael-ariascalles/followers",
"following_url": "https://api.github.com/users/rafael-ariascalles/following{/other_user}",
"gists_url": "https://api.github.com/users/rafael-ariascalles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafael-ariascalles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafael-ariascalles/subscriptions",
"organizations_url": "https://api.github.com/users/rafael-ariascalles/orgs",
"repos_url": "https://api.github.com/users/rafael-ariascalles/repos",
"events_url": "https://api.github.com/users/rafael-ariascalles/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafael-ariascalles/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,656,393,861,000 | 1,656,395,222,000 | 1,656,395,222,000 | NONE | null | I have added FLAC audio support using torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is being used as the audio format by https://mlcommons.org/en/peoples-speech/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4583/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4583",
"html_url": "https://github.com/huggingface/datasets/pull/4583",
"diff_url": "https://github.com/huggingface/datasets/pull/4583.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4583.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4582/comments | https://api.github.com/repos/huggingface/datasets/issues/4582/events | https://github.com/huggingface/datasets/pull/4582 | 1,286,517,060 | PR_kwDODunzps46dC59 | 4,582 | add_column should preserve _indexes | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint."
] | 1,656,369,347,000 | 1,657,120,794,000 | null | CONTRIBUTOR | null | https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126
doing `.add_column("x",x_data)` also removed any `_indexes` on the dataset, decided this shouldn't be the case.
This was because `add_column` was creating a new `Dataset(...)` and wasn't possible to pass indexes on init.
with this PR now can pass 'indexes' on init through `IndexableMixin`
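A test-style sketch of the expected behavior (requires `faiss-cpu`; toy data made up):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "emb": [[0.0, 1.0], [1.0, 0.0]]})
ds.add_faiss_index(column="emb")

ds = ds.add_column("label", [0, 1])
# Before this PR, the FAISS index was silently dropped by add_column:
assert ds.list_indexes() == ["emb"]
```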
- [x] Added test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4582/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4582",
"html_url": "https://github.com/huggingface/datasets/pull/4582",
"diff_url": "https://github.com/huggingface/datasets/pull/4582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4582.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4581/comments | https://api.github.com/repos/huggingface/datasets/issues/4581/events | https://github.com/huggingface/datasets/issues/4581 | 1,286,362,907 | I_kwDODunzps5MrFcb | 4,581 | Dataset Viewer issue for pn_summary | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?",
"Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n",
"Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else."
] | 1,656,363,372,000 | 1,656,427,323,000 | 1,656,427,323,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation
### Description
Getting an index error on the `validation` and `test` splits:
```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4581/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4580/comments | https://api.github.com/repos/huggingface/datasets/issues/4580/events | https://github.com/huggingface/datasets/issues/4580 | 1,286,312,912 | I_kwDODunzps5Mq5PQ | 4,580 | Dataset Viewer issue for multi_news | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.",
"I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt"
] | 1,656,361,525,000 | 1,656,425,328,000 | 1,656,425,328,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4580/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4579/comments | https://api.github.com/repos/huggingface/datasets/issues/4579/events | https://github.com/huggingface/datasets/pull/4579 | 1,286,106,285 | PR_kwDODunzps46bo2h | 4,579 | Support streaming cfq dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.",
"I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).",
"There is an issue when testing `test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).",
"This PR should be merged first:\r\n- #4611",
"Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)"
] | 1,656,349,883,000 | 1,656,963,301,000 | 1,656,962,637,000 | MEMBER | null | Support streaming cfq dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4579/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4579",
"html_url": "https://github.com/huggingface/datasets/pull/4579",
"diff_url": "https://github.com/huggingface/datasets/pull/4579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4579.patch",
"merged_at": "2022-07-04T19:23:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4578/comments | https://api.github.com/repos/huggingface/datasets/issues/4578/events | https://github.com/huggingface/datasets/issues/4578 | 1,286,086,400 | I_kwDODunzps5MqB8A | 4,578 | [Multi Configs] Use directories to differentiate between subsets/configurations | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | [
"I want to be able to create folders in a model.",
"How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?",
"> The document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?\r\n\r\nIt works the same - you just need to use local paths instead of URLs"
] | 1,656,348,911,000 | 1,686,757,385,000 | null | MEMBER | null | Currently to define several subsets/configurations of your dataset, you need to use a dataset script.
However, it would be nice to have a no-code way to do this.
For example, we could specify different configurations of a dataset (e.g. if a dataset contains different languages) with one directory per configuration.
These structures are not supported right now, but would be nice to have:
```
my_dataset_repository/
├── README.md
├── en/
│ ├── train.csv
│ └── test.csv
└── fr/
├── train.csv
└── test.csv
```
Or with one directory per split:
```
my_dataset_repository/
├── README.md
├── en/
│ ├── train/
│ │ ├── shard_0.csv
│ │ └── shard_1.csv
│ └── test/
│ ├── shard_0.csv
│ └── shard_1.csv
└── fr/
├── train/
│ ├── shard_0.csv
│ └── shard_1.csv
└── test/
├── shard_0.csv
└── shard_1.csv
```
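With such a layout, users could then pick a configuration by name when loading (repository id hypothetical):
```python
from datasets import load_dataset

en = load_dataset("username/my_dataset_repository", "en")
fr = load_dataset("username/my_dataset_repository", "fr")
```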
cc @stevhliu @albertvillanova
This can be specified in the README as YAML with:
```
configs:
- config_name: en
data_dir: en
- config_name: fr
data_dir: fr
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4578/reactions",
"total_count": 16,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 5,
"eyes": 4
} | https://api.github.com/repos/huggingface/datasets/issues/4578/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4577/comments | https://api.github.com/repos/huggingface/datasets/issues/4577/events | https://github.com/huggingface/datasets/pull/4577 | 1,285,703,775 | PR_kwDODunzps46aTWL | 4,577 | Add authentication tip to `load_dataset` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,656,331,534,000 | 1,656,940,395,000 | 1,656,939,690,000 | CONTRIBUTOR | null | Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4577/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4577",
"html_url": "https://github.com/huggingface/datasets/pull/4577",
"diff_url": "https://github.com/huggingface/datasets/pull/4577.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4577.patch",
"merged_at": "2022-07-04T13:01:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4576/comments | https://api.github.com/repos/huggingface/datasets/issues/4576/events | https://github.com/huggingface/datasets/pull/4576 | 1,285,698,576 | PR_kwDODunzps46aSN_ | 4,576 | Include `metadata.jsonl` in resolved data files | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?",
"Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?",
"@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n",
"The CI still struggles but you can merge since at least one of the two WIN CI succeeded"
] | 1,656,331,289,000 | 1,656,679,495,000 | 1,656,584,132,000 | CONTRIBUTOR | null | Include `metadata.jsonl` in resolved data files.
Fix #4548
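For context, this is the kind of `imagefolder` layout whose metadata file needs to end up in the resolved data files (file names made up):
```
my_folder/
├── metadata.jsonl # one JSON object per line, e.g. {"file_name": "cat.png", "caption": "a cat"}
├── cat.png
└── dog.png
```
```python
from datasets import load_dataset

# The extra "caption" column only appears if metadata.jsonl is resolved
# together with the image files:
ds = load_dataset("imagefolder", data_dir="my_folder")
```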
@lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4576/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4576",
"html_url": "https://github.com/huggingface/datasets/pull/4576",
"diff_url": "https://github.com/huggingface/datasets/pull/4576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4576.patch",
"merged_at": "2022-06-30T10:15:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4575/comments | https://api.github.com/repos/huggingface/datasets/issues/4575/events | https://github.com/huggingface/datasets/issues/4575 | 1,285,446,700 | I_kwDODunzps5Mnlws | 4,575 | Problem about wmt17 zh-en dataset | {
"login": "winterfell2021",
"id": 85819194,
"node_id": "MDQ6VXNlcjg1ODE5MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/85819194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winterfell2021",
"html_url": "https://github.com/winterfell2021",
"followers_url": "https://api.github.com/users/winterfell2021/followers",
"following_url": "https://api.github.com/users/winterfell2021/following{/other_user}",
"gists_url": "https://api.github.com/users/winterfell2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winterfell2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winterfell2021/subscriptions",
"organizations_url": "https://api.github.com/users/winterfell2021/orgs",
"repos_url": "https://api.github.com/users/winterfell2021/repos",
"events_url": "https://api.github.com/users/winterfell2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/winterfell2021/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.",
"@albertvillanova @lhoestq Could you take a look at this issue?",
"@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the 'zh' item is none.",
"I found some 'zh' item is none while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n py_array = array.to_pylist()\r\n data_list = []\r\n for vo in py_array:\r\n tmp = {\r\n 'en': vo['en'],\r\n }\r\n if vo.get('zh'):\r\n tmp['zh'] = vo['zh']\r\n else:\r\n tmp['zh'] = vo['c[hn]']\r\n data_list.append(tmp)\r\n array = pa.array(data_list, type=pa.struct([\r\n pa.field('en', pa.string()),\r\n pa.field('zh', pa.string()),\r\n ]))\r\n```",
"I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`"
] | 1,656,318,942,000 | 1,661,248,862,000 | 1,661,248,821,000 | NONE | null | It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`.
So loading the wmt17 zh-en dataset with `data = load_dataset('wmt17', 'zh-en')` raises the following exception:
```
Traceback (most recent call last):
File "train.py", line 78, in <module>
data = load_dataset(args.dataset, "zh-en")
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset
use_auth_token=use_auth_token,
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize
self.write_examples_on_file()
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<c[hn]: string, en: string, zh: string>
to
struct<en: string, zh: string>
```
A workaround for this problem is to modify the original array manually:
```python
if 'c[hn]' in str(array.type):
py_array = array.to_pylist()
data_list = []
for vo in py_array:
tmp = {
'en': vo['en'],
}
if 'zh' not in vo:
tmp['zh'] = vo['c[hn]']
else:
tmp['zh'] = vo['zh']
data_list.append(tmp)
array = pa.array(data_list, type=pa.struct([
pa.field('en', pa.string()),
pa.field('zh', pa.string()),
]))
```
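For reference, a self-contained sketch of the same workaround (the sample rows are invented; note the refinement in the comments above for rows where 'zh' exists but is None):

```python
import pyarrow as pa

# Simulate the malformed casia2015 rows, where a stray 'c[hn]' field appears.
array = pa.array(
    [
        {"c[hn]": "你好", "en": "hello", "zh": None},
        {"c[hn]": None, "en": "world", "zh": "世界"},
    ],
    type=pa.struct(
        [pa.field("c[hn]", pa.string()), pa.field("en", pa.string()), pa.field("zh", pa.string())]
    ),
)

if "c[hn]" in str(array.type):
    # Prefer 'zh' when populated; otherwise fall back to the stray 'c[hn]' value.
    fixed = [{"en": row["en"], "zh": row["zh"] or row["c[hn]"]} for row in array.to_pylist()]
    array = pa.array(
        fixed, type=pa.struct([pa.field("en", pa.string()), pa.field("zh", pa.string())])
    )

print(array.type)  # struct<en: string, zh: string>
```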
Therefore, a corrected version of the original casia2015 file may need to be uploaded. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4575/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4574/comments | https://api.github.com/repos/huggingface/datasets/issues/4574/events | https://github.com/huggingface/datasets/pull/4574 | 1,285,380,616 | PR_kwDODunzps46ZOpZ | 4,574 | Support streaming mlsum dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/datasets/tests/conftest.py'.\r\ntests/conftest.py:13: in <module>\r\n import datasets\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>\r\n from .arrow_dataset import Dataset\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_dataset.py:62: in <module>\r\n from .arrow_reader import ArrowReader\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_reader.py:29: in <module>\r\n from .download.download_config import DownloadConfig\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/__init__.py:10: in <module>\r\n from .streaming_download_manager import StreamingDownloadManager\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/streaming_download_manager.py:20: in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/__init__.py:13: in <module>\r\n from .s3filesystem import S3FileSystem # noqa: F401\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py:1: in <module>\r\n import s3fs\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/__init__.py:1: in <module>\r\n from .core import S3FileSystem, S3File\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/core.py:12: in <module>\r\n from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?",
"Maybe you can try setting the same minimum version as fsspec ? `s3fs>=2021.11.1`",
"Yes, I have checked that they both require to have the same version. \r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1",
"I have updated all min versions so that they are compatible one with each other. I'm pushing again...",
"Thanks !",
"Nice!"
] | 1,656,315,423,000 | 1,658,410,650,000 | 1,658,407,200,000 | MEMBER | null | Support streaming mlsum dataset.
This PR:
- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`
- https://github.com/fsspec/filesystem_spec/pull/830
- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`
> s3fs 2021.8.1 requires fsspec==2021.08.1
- see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326
- updates the following requirements to be compatible with the previous ones and with each other:
- `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1)
- `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1)
- `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3)
Fix #4572. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4574/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4574",
"html_url": "https://github.com/huggingface/datasets/pull/4574",
"diff_url": "https://github.com/huggingface/datasets/pull/4574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4574.patch",
"merged_at": "2022-07-21T12:40:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4573/comments | https://api.github.com/repos/huggingface/datasets/issues/4573/events | https://github.com/huggingface/datasets/pull/4573 | 1,285,023,629 | PR_kwDODunzps46YEEa | 4,573 | Fix evaluation metadata for ncbi_disease | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 1,656,275,372,000 | 1,663,926,000,000 | 1,663,925,882,000 | MEMBER | null | This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4573/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4573",
"html_url": "https://github.com/huggingface/datasets/pull/4573",
"diff_url": "https://github.com/huggingface/datasets/pull/4573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4573.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4572/comments | https://api.github.com/repos/huggingface/datasets/issues/4572/events | https://github.com/huggingface/datasets/issues/4572 | 1,285,022,499 | I_kwDODunzps5Ml-Mj | 4,572 | Dataset Viewer issue for mlsum | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] | 1,656,275,057,000 | 1,658,407,201,000 | 1,658,407,201,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There seems to be a problem with the download/streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
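A maintainer comment above traces this to the host https://gitlab.lip6.fr not honoring HTTP Range requests, which streaming relies on. A quick probe for Range support could look like this (the file path is illustrative, not the real dataset URL):

```python
import requests

# A server that supports Range answers 206 Partial Content; one that ignores it answers 200.
url = "https://gitlab.lip6.fr/some/archive.zip"  # illustrative path
resp = requests.get(url, headers={"Range": "bytes=0-99"}, stream=True)
print(resp.status_code, resp.headers.get("Content-Range"))
```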
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4572/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4571/comments | https://api.github.com/repos/huggingface/datasets/issues/4571/events | https://github.com/huggingface/datasets/issues/4571 | 1,284,883,289 | I_kwDODunzps5MlcNZ | 4,571 | Dataset Viewer issue for gsarti/flores_101 | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?"
] | 1,656,242,349,000 | 1,662,624,998,000 | null | MEMBER | null | ### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
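The error message points at `dl_manager.iter_archive`, which streams (path, file-object) pairs out of a TAR archive without extracting it. A rough sketch of that pattern inside a loading script follows (the feature schema, split choice, and `.dev` suffix are guesses, not the actual flores_101 layout):

```python
import datasets

_URL = "https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz"

class Flores101(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"sentence": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)  # download only; no extraction needed
        return [
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path-inside-archive, binary file-like object) pairs.
        for path, f in files:
            if path.endswith(".dev"):
                for idx, line in enumerate(f):
                    yield f"{path}-{idx}", {"sentence": line.decode("utf-8").strip()}
```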
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4571/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4570/comments | https://api.github.com/repos/huggingface/datasets/issues/4570/events | https://github.com/huggingface/datasets/issues/4570 | 1,284,846,168 | I_kwDODunzps5MlTJY | 4,570 | Dataset sharding non-contiguous? | {
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ",
"Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ",
"@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ",
"This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)."
] | 1,656,232,445,000 | 1,656,586,847,000 | 1,656,254,180,000 | CONTRIBUTOR | null | ## Describe the bug
I'm not sure if this is a bug; it's more likely normal behavior, but I wanted to double-check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466), but I have to admit I did not properly look into the changes made.
## Steps to reproduce the bug
```python
import os
from datasets.utils.py_utils import convert_file_size_to_int  # assumed import path

# `dataset` is assumed to be a previously loaded datasets.Dataset
max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dataset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"{num_shards=}")
for shard_index in range(num_shards):
shard = dataset.shard(num_shards=num_shards, index=shard_index)
shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
os.listdir('tokenized/')
```
## Expected results
I expected the shards to preserve the order of the original dataset, i.e. `dataset[10]` being the same as `shard_1[10]`, for example.
## Actual results
Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]`
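For what it's worth, the fix pointed out in the comments above (`contiguous=True`) can be sketched like this:

```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"x": list(range(10))})
num_shards = 3
shards = [ds.shard(num_shards=num_shards, index=i, contiguous=True) for i in range(num_shards)]

# With contiguous=True, concatenating the shards reproduces the original order.
assert concatenate_datasets(shards)["x"] == ds["x"]
```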
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4570/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4569/comments | https://api.github.com/repos/huggingface/datasets/issues/4569/events | https://github.com/huggingface/datasets/issues/4569 | 1,284,833,694 | I_kwDODunzps5MlQGe | 4,569 | Dataset Viewer issue for sst2 | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ",
"Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)"
] | 1,656,228,774,000 | 1,656,311,868,000 | 1,656,311,868,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/sst2
### Description
Not sure what is causing this; however, it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without a problem):
```
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4569/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4568/comments | https://api.github.com/repos/huggingface/datasets/issues/4568/events | https://github.com/huggingface/datasets/issues/4568 | 1,284,655,624 | I_kwDODunzps5MkkoI | 4,568 | XNLI cache reload is very slow | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ",
"Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.",
"Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)."
] | 1,656,175,436,000 | 1,656,944,980,000 | 1,656,944,980,000 | CONTRIBUTOR | null | ### Reproduce
Using `2.3.3.dev0`
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off Internet
`load_dataset("xnli", "en")`
I cancelled the second `load_dataset` eventually because it took so long. It would be great to have a way to specify e.g. `only_load_from_cache` so that the library avoids trying to download when there is no Internet. If I leave it running, it works but takes much longer than when there is Internet. I would expect loading from the cache to take the same amount of time regardless of whether there is Internet.
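As a maintainer notes in the comments above, `HF_DATASETS_OFFLINE` already provides this; a minimal sketch of forcing cache-only loading (the variable must be set before `datasets` is imported). The traceback below shows the network call that otherwise stalls:

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

ds = load_dataset("xnli", "en")  # served straight from the local cache, no network calls
```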
```
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
/opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
71
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
/opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags)
751 addrlist = []
--> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
753 af, socktype, proto, canonname, sa = res
gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
KeyboardInterrupt Traceback (most recent call last)
/tmp/ipykernel_33/3594208039.py in <module>
----> 1 load_dataset("xnli", "en")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1673 revision=revision,
1674 use_auth_token=use_auth_token,
-> 1675 **config_kwargs,
1676 )
1677
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1494 download_mode=download_mode,
1495 data_dir=data_dir,
-> 1496 data_files=data_files,
1497 )
1498
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1182 download_config=download_config,
1183 download_mode=download_mode,
-> 1184 dynamic_modules_path=dynamic_modules_path,
1185 ).get_module()
1186 elif path.count("/") == 1: # community dataset on the Hub
/opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path)
506 self.dynamic_modules_path = dynamic_modules_path
507 assert self.name.count("/") == 0
--> 508 increase_load_count(name, resource_type="dataset")
509
510 def download_loading_script(self, revision: Optional[str]) -> str:
/opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type)
166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS:
167 try:
--> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset"))
169 except Exception:
170 pass
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries)
93 return http_head(
94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset),
---> 95 max_retries=max_retries,
96 )
97
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
445 allow_redirects=allow_redirects,
446 timeout=timeout,
--> 447 max_retries=max_retries,
448 )
449 return response
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
366 tries += 1
367 try:
--> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
369 success = True
370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
527 }
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
530
531 return resp
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs)
643
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
646
647 # Total elapsed time of the request (approximately)
/opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 decode_content=False,
449 retries=self.max_retries,
--> 450 timeout=timeout
451 )
452
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
708 body=body,
709 headers=headers,
--> 710 chunked=chunked,
711 )
712
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
384 # Trigger any extra validation we need to do.
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
1038 # Force connect early to allow us to validate the connection.
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1041
1042 if not conn.is_verified:
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self)
356 def connect(self):
357 # Add certificate verification
--> 358 self.sock = conn = self._new_conn()
359 hostname = self.host
360 tls_in_tls = False
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
173 try:
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
177
KeyboardInterrupt:
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4568/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4567/comments | https://api.github.com/repos/huggingface/datasets/issues/4567/events | https://github.com/huggingface/datasets/pull/4567 | 1,284,528,474 | PR_kwDODunzps46Wh0- | 4,567 | Add evaluation data for amazon_reviews_multi | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 1,656,150,052,000 | 1,663,925,979,000 | 1,663,925,843,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4567/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4567",
"html_url": "https://github.com/huggingface/datasets/pull/4567",
"diff_url": "https://github.com/huggingface/datasets/pull/4567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4567.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4566/comments | https://api.github.com/repos/huggingface/datasets/issues/4566/events | https://github.com/huggingface/datasets/issues/4566 | 1,284,397,594 | I_kwDODunzps5Mjloa | 4,566 | Document link #load_dataset_enhancing_performance points to nowhere | {
"login": "subercui",
"id": 11674033,
"node_id": "MDQ6VXNlcjExNjc0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subercui",
"html_url": "https://github.com/subercui",
"followers_url": "https://api.github.com/users/subercui/followers",
"following_url": "https://api.github.com/users/subercui/following{/other_user}",
"gists_url": "https://api.github.com/users/subercui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subercui/subscriptions",
"organizations_url": "https://api.github.com/users/subercui/orgs",
"repos_url": "https://api.github.com/users/subercui/repos",
"events_url": "https://api.github.com/users/subercui/events{/privacy}",
"received_events_url": "https://api.github.com/users/subercui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | [
"Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?",
"https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works."
] | 1,656,119,899,000 | 1,674,578,020,000 | 1,674,578,020,000 | NONE | null | ## Describe the bug
![image](https://user-images.githubusercontent.com/11674033/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png)
The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere; I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4566/timeline | null | completed | null | null | false |