autotrain-data-processor committed a6fb050 (1 parent: 3c75d04)

Processed data from AutoTrain data processor (2023-07-26 00:54)

README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ language:
+ - zh
+ task_categories:
+ - summarization
+
+ ---
+ # AutoTrain Dataset for project: lwf-summarization
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project lwf-summarization.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is zh.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+ {
+ "feat_id": "13716782",
+ "target": "The scariest place for Jessica was the Capuchin Catacombs in Palermo.",
+ "text": "Kelly: Oh! Oh! Can I pick the first question?\r\nJessica: Sure. Go for it!\r\nKelly: What's the scariest place you've been to!\r\nJessica: I'll start: Palermo in Italy.\r\nMickey: And what's so scary about that? Did you break your nail? :P\r\nJessica: Shut it, Mickey! No, there are the Capuchin Catacombs with 8000 corpses! \r\nKelly: Ewwww! Corpses? Rly?\r\nJessica: Yeah! And you can look at them like museum exhibits. I think they're divided somehow, but have no clue how!\r\nOllie: That's so cool! Do you get to see the bones or are they covered up?\r\nJessica: Well, partly. Most of them were exhibited in their clothes. Basically only skulls and hands. \r\nMickey: I'm writing this one down! That's so precious!\r\nOllie: Me too!"
+ },
+ {
+ "feat_id": "13716592",
+ "target": "Carrie and Gina saw \"Fantastic Beast\" and liked it. Ginna loved Eddie Redmayne as Newt. ",
+ "text": "Carrie: Just back from Fantastic Beast :)\r\nGina: and what do you think?\r\nCarrie: generally good - as usual nice special effect and visuals, an ok plot, a glimpse of the wizarding community in the US.\r\nAlex: Sounds cool. I was thinking of going this weekend with Lane, but I've seen some bad reviews.\r\nCarrie: Depends on what you expect really - I have a lot of sentiment towards Harry Potter so, I'm gonna like everything the do. But seriously the movie was decent. However, if you're expecting to have your mind blown, then no, it's not THAT good.\r\nGina: I agree. I saw it last week and basically I'm satisfied.\r\nAlex: No spoilers, girls.\r\nCarrie: no worries ;)\r\nCarrie: And Gina, what do you think about Eddie Redmayne as Newt?\r\nGina: I loved him <3 I loved how introverted and awkward he was and how caring he was towards the animals. And with all that he showed a lot of confidence in his beliefs and was a genuinely compassionate character\r\nCarrie: not your standard protagonist, that's for sure\r\nGina: and that's what I liked about him\r\nAlex: Maybe I'll go and see it sooner so we can all talk about it.\r\nCarrie: go see it. If' you're not expecting god-knows-what you're going to enjoy it ;)"
+ }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+ "feat_id": "Value(dtype='string', id=None)",
+ "target": "Value(dtype='string', id=None)",
+ "text": "Value(dtype='string', id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into train and validation splits. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ------------ | ------------------- |
+ | train | 655 |
+ | valid | 164 |
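
The field schema above is printed in the `datasets` library's repr form, and the split table gives the sample counts. A minimal plain-Python sketch (values copied from the README above) that parses the field block and cross-checks the split ratio:

```python
import json

# Field schema as shown in the "Dataset Fields" section above.
fields_json = """
{
  "feat_id": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)",
  "text": "Value(dtype='string', id=None)"
}
"""

fields = json.loads(fields_json)

# Every column in this dataset is a plain string feature.
assert all(v.startswith("Value(dtype='string'") for v in fields.values())

# Split sizes from the "Dataset Splits" table above.
splits = {"train": 655, "valid": 164}
total = sum(splits.values())

print(sorted(fields))                     # ['feat_id', 'target', 'text']
print(total)                              # 819
print(round(splits["valid"] / total, 2))  # 0.2 -- roughly an 80/20 split
```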
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
processed/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3fec66d9a2841da75e459f18c96842269d8524261ae9893992f74401dba0b546
+ size 429560
processed/train/dataset_info.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "citation": "",
+ "description": "AutoTrain generated dataset",
+ "features": {
+ "feat_id": {
+ "dtype": "string",
+ "_type": "Value"
+ },
+ "target": {
+ "dtype": "string",
+ "_type": "Value"
+ },
+ "text": {
+ "dtype": "string",
+ "_type": "Value"
+ }
+ },
+ "homepage": "",
+ "license": "",
+ "splits": {
+ "train": {
+ "name": "train",
+ "num_bytes": 428804,
+ "num_examples": 655,
+ "dataset_name": null
+ }
+ }
+ }
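
`dataset_info.json` is the metadata file the `datasets` library writes alongside each saved split, declaring the features and the recorded split size. A minimal plain-Python sketch (JSON mirrored from the train file above, with the empty citation/homepage/license fields omitted) reading that metadata:

```python
import json

# Split metadata mirrored from processed/train/dataset_info.json above.
dataset_info = json.loads("""
{
  "description": "AutoTrain generated dataset",
  "features": {
    "feat_id": {"dtype": "string", "_type": "Value"},
    "target": {"dtype": "string", "_type": "Value"},
    "text": {"dtype": "string", "_type": "Value"}
  },
  "splits": {
    "train": {"name": "train", "num_bytes": 428804, "num_examples": 655}
  }
}
""")

train = dataset_info["splits"]["train"]

print(list(dataset_info["features"]))               # ['feat_id', 'target', 'text']
print(train["num_examples"])                        # 655
print(train["num_bytes"] // train["num_examples"])  # 654 -- average bytes per example
```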
processed/train/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "_data_files": [
+ {
+ "filename": "data-00000-of-00001.arrow"
+ }
+ ],
+ "_fingerprint": "55e49768f323208c",
+ "_format_columns": [
+ "feat_id",
+ "target",
+ "text"
+ ],
+ "_format_kwargs": {},
+ "_format_type": null,
+ "_output_all_columns": false,
+ "_split": null
+ }
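
`state.json` records how a saved split is materialized: the `.arrow` shards that back it, a `_fingerprint` the `datasets` library uses for caching, and the `_format_columns` it will expose when a format is set. A minimal plain-Python sketch (values copied from the train file above) checking that those columns line up with the features declared in `dataset_info.json`:

```python
import json

# State mirrored from processed/train/state.json above.
state = json.loads("""
{
  "_data_files": [{"filename": "data-00000-of-00001.arrow"}],
  "_fingerprint": "55e49768f323208c",
  "_format_columns": ["feat_id", "target", "text"],
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
""")

feature_names = ["feat_id", "target", "text"]  # from dataset_info.json above

# The formatted columns should cover exactly the declared features.
assert state["_format_columns"] == feature_names

print(state["_data_files"][0]["filename"])  # data-00000-of-00001.arrow
print(state["_fingerprint"])                # 55e49768f323208c
```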
processed/valid/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:682a9e994294869cf308b35dda09f15c9a3ed9b3ea4e59077622da46e5e61ddc
+ size 106448
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "citation": "",
+ "description": "AutoTrain generated dataset",
+ "features": {
+ "feat_id": {
+ "dtype": "string",
+ "_type": "Value"
+ },
+ "target": {
+ "dtype": "string",
+ "_type": "Value"
+ },
+ "text": {
+ "dtype": "string",
+ "_type": "Value"
+ }
+ },
+ "homepage": "",
+ "license": "",
+ "splits": {
+ "valid": {
+ "name": "valid",
+ "num_bytes": 105676,
+ "num_examples": 164,
+ "dataset_name": null
+ }
+ }
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,17 @@
+ {
+ "_data_files": [
+ {
+ "filename": "data-00000-of-00001.arrow"
+ }
+ ],
+ "_fingerprint": "21056fc221ab4c77",
+ "_format_columns": [
+ "feat_id",
+ "target",
+ "text"
+ ],
+ "_format_kwargs": {},
+ "_format_type": null,
+ "_output_all_columns": false,
+ "_split": null
+ }