Commit f602910

Update files from the datasets library (from 1.8.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.8.0
- .gitattributes +27 -0
- README.md +311 -0
- code_x_glue_ct_code_to_text.py +155 -0
- common.py +75 -0
- dataset_infos.json +1 -0
- dummy/go/0.0.0/dummy_data.zip +3 -0
- dummy/java/0.0.0/dummy_data.zip +3 -0
- dummy/javascript/0.0.0/dummy_data.zip +3 -0
- dummy/php/0.0.0/dummy_data.zip +3 -0
- dummy/python/0.0.0/dummy_data.zip +3 -0
- dummy/ruby/0.0.0/dummy_data.zip +3 -0
- generated_definitions.py +68 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
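Every line above routes matching files through Git LFS instead of storing them directly in the repository. As a rough illustration (the `fnmatch` globbing used here only approximates real gitattributes matching, e.g. for `saved_model/**/*`, and `uses_lfs` is a hypothetical helper, not part of this repo):

```python
from fnmatch import fnmatch

# A representative subset of the LFS patterns from the .gitattributes above.
# NOTE: fnmatch is an approximation of gitattributes glob semantics.
LFS_PATTERNS = ["*.7z", "*.zip", "*.parquet", "*tfevents*"]


def uses_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)


print(uses_lfs("dummy_data.zip"))  # True: matches *.zip
print(uses_lfs("README.md"))       # False: no pattern matches
```

This is why the `dummy_data.zip` files in this commit show as `+3 -0`: LFS stores a small pointer file in Git rather than the binary itself.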
README.md
ADDED
@@ -0,0 +1,311 @@
---
annotations_creators:
- found
language_creators:
- found
languages:
- code
- en
licenses:
- other-C-UDA
multilinguality:
- other-programming-languages
size_categories:
  go:
  - 100K<n<1M
  java:
  - 100K<n<1M
  javascript:
  - 10K<n<100K
  php:
  - 100K<n<1M
  python:
  - 100K<n<1M
  ruby:
  - 10K<n<100K
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- machine-translation
---
# Dataset Card for "code_x_glue_ct_code_to_text"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description

- **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text

### Dataset Summary

CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text

The dataset comes from CodeSearchNet, filtered as follows:

- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documents contain fewer than 3 or more than 256 tokens.
- Remove examples whose documents contain special tokens (e.g. `<img ...>` or `https:...`).
- Remove examples whose documents are not written in English.
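The document-level filters above can be sketched as a single predicate. This is an illustrative re-statement of the listed rules, not the actual filtering code used by CodeXGLUE; the helper name and the exact special-token test are assumptions:

```python
def keep_example(docstring_tokens):
    """Sketch of the document-side filters listed above.

    - fewer than 3 or more than 256 tokens -> drop
    - special tokens such as '<img ...>' or 'https:...' -> drop
    """
    n = len(docstring_tokens)
    if n < 3 or n > 256:
        return False
    # Treat tokens beginning with '<img' or 'https:' as special tokens.
    if any(tok.startswith(("<img", "https:")) for tok in docstring_tokens):
        return False
    return True


print(keep_example(["Save", "model", "to", "a", "pickle"]))  # True
print(keep_example(["see", "https://example.com"]))          # False
```

The AST-parseability and English-language checks would need a parser and a language detector and are omitted here.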
### Supported Tasks and Leaderboards

- `machine-translation`: The dataset can be used to train a model for automatically generating **English** docstrings for code.

### Languages

- Go **programming** language
- Java **programming** language
- JavaScript **programming** language
- PHP **programming** language
- Python **programming** language
- Ruby **programming** language
- English **natural** language
## Dataset Structure

### Data Instances

#### go

An example of 'test' looks as follows.
```
{
    "code": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
    "code_tokens": ["func", "NewSTM", "(", "c", "*", "v3", ".", "Client", ",", "apply", "func", "(", "STM", ")", "error", ",", "so", "...", "stmOption", ")", "(", "*", "v3", ".", "TxnResponse", ",", "error", ")", "{", "opts", ":=", "&", "stmOptions", "{", "ctx", ":", "c", ".", "Ctx", "(", ")", "}", "\n", "for", "_", ",", "f", ":=", "range", "so", "{", "f", "(", "opts", ")", "\n", "}", "\n", "if", "len", "(", "opts", ".", "prefetch", ")", "!=", "0", "{", "f", ":=", "apply", "\n", "apply", "=", "func", "(", "s", "STM", ")", "error", "{", "s", ".", "Get", "(", "opts", ".", "prefetch", "...", ")", "\n", "return", "f", "(", "s", ")", "\n", "}", "\n", "}", "\n", "return", "runSTM", "(", "mkSTM", "(", "c", ",", "opts", ")", ",", "apply", ")", "\n", "}"],
    "docstring": "// NewSTM initiates a new STM instance, using serializable snapshot isolation by default.",
    "docstring_tokens": ["NewSTM", "initiates", "a", "new", "STM", "instance", "using", "serializable", "snapshot", "isolation", "by", "default", "."],
    "func_name": "NewSTM",
    "id": 0,
    "language": "go",
    "original_string": "func NewSTM(c *v3.Client, apply func(STM) error, so ...stmOption) (*v3.TxnResponse, error) {\n\topts := &stmOptions{ctx: c.Ctx()}\n\tfor _, f := range so {\n\t\tf(opts)\n\t}\n\tif len(opts.prefetch) != 0 {\n\t\tf := apply\n\t\tapply = func(s STM) error {\n\t\t\ts.Get(opts.prefetch...)\n\t\t\treturn f(s)\n\t\t}\n\t}\n\treturn runSTM(mkSTM(c, opts), apply)\n}",
    "path": "clientv3/concurrency/stm.go",
    "repo": "etcd-io/etcd",
    "sha": "616592d9ba993e3fe9798eef581316016df98906",
    "url": "https://github.com/etcd-io/etcd/blob/616592d9ba993e3fe9798eef581316016df98906/clientv3/concurrency/stm.go#L89-L102"
}
```
#### java

An example of 'test' looks as follows.
```
{
    "code": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n        final Observer<? super V> observer = downstream;\n        final SimplePlainQueue<U> q = queue;\n\n        if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n            if (q.isEmpty()) {\n                accept(observer, value);\n                if (leave(-1) == 0) {\n                    return;\n                }\n            } else {\n                q.offer(value);\n            }\n        } else {\n            q.offer(value);\n            if (!enter()) {\n                return;\n            }\n        }\n        QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n    }",
    "code_tokens": ["protected", "final", "void", "fastPathOrderedEmit", "(", "U", "value", ",", "boolean", "delayError", ",", "Disposable", "disposable", ")", "{", "final", "Observer", "<", "?", "super", "V", ">", "observer", "=", "downstream", ";", "final", "SimplePlainQueue", "<", "U", ">", "q", "=", "queue", ";", "if", "(", "wip", ".", "get", "(", ")", "==", "0", "&&", "wip", ".", "compareAndSet", "(", "0", ",", "1", ")", ")", "{", "if", "(", "q", ".", "isEmpty", "(", ")", ")", "{", "accept", "(", "observer", ",", "value", ")", ";", "if", "(", "leave", "(", "-", "1", ")", "==", "0", ")", "{", "return", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "}", "}", "else", "{", "q", ".", "offer", "(", "value", ")", ";", "if", "(", "!", "enter", "(", ")", ")", "{", "return", ";", "}", "}", "QueueDrainHelper", ".", "drainLoop", "(", "q", ",", "observer", ",", "delayError", ",", "disposable", ",", "this", ")", ";", "}"],
    "docstring": "Makes sure the fast-path emits in order.\n@param value the value to emit or queue up\n@param delayError if true, errors are delayed until the source has terminated\n@param disposable the resource to dispose if the drain terminates",
    "docstring_tokens": ["Makes", "sure", "the", "fast", "-", "path", "emits", "in", "order", "."],
    "func_name": "QueueDrainObserver.fastPathOrderedEmit",
    "id": 0,
    "language": "java",
    "original_string": "protected final void fastPathOrderedEmit(U value, boolean delayError, Disposable disposable) {\n        final Observer<? super V> observer = downstream;\n        final SimplePlainQueue<U> q = queue;\n\n        if (wip.get() == 0 && wip.compareAndSet(0, 1)) {\n            if (q.isEmpty()) {\n                accept(observer, value);\n                if (leave(-1) == 0) {\n                    return;\n                }\n            } else {\n                q.offer(value);\n            }\n        } else {\n            q.offer(value);\n            if (!enter()) {\n                return;\n            }\n        }\n        QueueDrainHelper.drainLoop(q, observer, delayError, disposable, this);\n    }",
    "path": "src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java",
    "repo": "ReactiveX/RxJava",
    "sha": "ac84182aa2bd866b53e01c8e3fe99683b882c60e",
    "url": "https://github.com/ReactiveX/RxJava/blob/ac84182aa2bd866b53e01c8e3fe99683b882c60e/src/main/java/io/reactivex/internal/observers/QueueDrainObserver.java#L88-L108"
}
```
#### javascript

An example of 'test' looks as follows.
```
{
    "code": "function createInstance(defaultConfig) {\n  var context = new Axios(defaultConfig);\n  var instance = bind(Axios.prototype.request, context);\n\n  // Copy axios.prototype to instance\n  utils.extend(instance, Axios.prototype, context);\n\n  // Copy context to instance\n  utils.extend(instance, context);\n\n  return instance;\n}",
    "code_tokens": ["function", "createInstance", "(", "defaultConfig", ")", "{", "var", "context", "=", "new", "Axios", "(", "defaultConfig", ")", ";", "var", "instance", "=", "bind", "(", "Axios", ".", "prototype", ".", "request", ",", "context", ")", ";", "// Copy axios.prototype to instance", "utils", ".", "extend", "(", "instance", ",", "Axios", ".", "prototype", ",", "context", ")", ";", "// Copy context to instance", "utils", ".", "extend", "(", "instance", ",", "context", ")", ";", "return", "instance", ";", "}"],
    "docstring": "Create an instance of Axios\n\n@param {Object} defaultConfig The default config for the instance\n@return {Axios} A new instance of Axios",
    "docstring_tokens": ["Create", "an", "instance", "of", "Axios"],
    "func_name": "createInstance",
    "id": 0,
    "language": "javascript",
    "original_string": "function createInstance(defaultConfig) {\n  var context = new Axios(defaultConfig);\n  var instance = bind(Axios.prototype.request, context);\n\n  // Copy axios.prototype to instance\n  utils.extend(instance, Axios.prototype, context);\n\n  // Copy context to instance\n  utils.extend(instance, context);\n\n  return instance;\n}",
    "path": "lib/axios.js",
    "repo": "axios/axios",
    "sha": "92d231387fe2092f8736bc1746d4caa766b675f5",
    "url": "https://github.com/axios/axios/blob/92d231387fe2092f8736bc1746d4caa766b675f5/lib/axios.js#L15-L26"
}
```
#### php

An example of 'train' looks as follows.
```
{
    "code": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n    {\n        $config += [\n            'httpHandler' => null,\n        ];\n        list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n        $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n        $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n        return new RestTransport($requestBuilder, $httpHandler);\n    }",
    "code_tokens": ["public", "static", "function", "build", "(", "$", "serviceAddress", ",", "$", "restConfigPath", ",", "array", "$", "config", "=", "[", "]", ")", "{", "$", "config", "+=", "[", "'httpHandler'", "=>", "null", ",", "]", ";", "list", "(", "$", "baseUri", ",", "$", "port", ")", "=", "self", "::", "normalizeServiceAddress", "(", "$", "serviceAddress", ")", ";", "$", "requestBuilder", "=", "new", "RequestBuilder", "(", "\"$baseUri:$port\"", ",", "$", "restConfigPath", ")", ";", "$", "httpHandler", "=", "$", "config", "[", "'httpHandler'", "]", "?", ":", "self", "::", "buildHttpHandlerAsync", "(", ")", ";", "return", "new", "RestTransport", "(", "$", "requestBuilder", ",", "$", "httpHandler", ")", ";", "}"],
    "docstring": "Builds a RestTransport.\n\n@param string $serviceAddress\nThe address of the API remote host, for example \"example.googleapis.com\".\n@param string $restConfigPath\nPath to rest config file.\n@param array $config {\nConfig options used to construct the gRPC transport.\n\n@type callable $httpHandler A handler used to deliver PSR-7 requests.\n}\n@return RestTransport\n@throws ValidationException",
    "docstring_tokens": ["Builds", "a", "RestTransport", "."],
    "func_name": "RestTransport.build",
    "id": 0,
    "language": "php",
    "original_string": "public static function build($serviceAddress, $restConfigPath, array $config = [])\n    {\n        $config += [\n            'httpHandler' => null,\n        ];\n        list($baseUri, $port) = self::normalizeServiceAddress($serviceAddress);\n        $requestBuilder = new RequestBuilder(\"$baseUri:$port\", $restConfigPath);\n        $httpHandler = $config['httpHandler'] ?: self::buildHttpHandlerAsync();\n        return new RestTransport($requestBuilder, $httpHandler);\n    }",
    "path": "src/Transport/RestTransport.php",
    "repo": "googleapis/gax-php",
    "sha": "48387fb818c6882296710a2302a0aa973b99afb2",
    "url": "https://github.com/googleapis/gax-php/blob/48387fb818c6882296710a2302a0aa973b99afb2/src/Transport/RestTransport.php#L85-L94"
}
```
#### python

An example of 'validation' looks as follows.
```
{
    "code": "def save_act(self, path=None):\n        \"\"\"Save model to a pickle located at `path`\"\"\"\n        if path is None:\n            path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n        with tempfile.TemporaryDirectory() as td:\n            save_variables(os.path.join(td, \"model\"))\n            arc_name = os.path.join(td, \"packed.zip\")\n            with zipfile.ZipFile(arc_name, 'w') as zipf:\n                for root, dirs, files in os.walk(td):\n                    for fname in files:\n                        file_path = os.path.join(root, fname)\n                        if file_path != arc_name:\n                            zipf.write(file_path, os.path.relpath(file_path, td))\n            with open(arc_name, \"rb\") as f:\n                model_data = f.read()\n        with open(path, \"wb\") as f:\n            cloudpickle.dump((model_data, self._act_params), f)",
    "code_tokens": ["def", "save_act", "(", "self", ",", "path", "=", "None", ")", ":", "if", "path", "is", "None", ":", "path", "=", "os", ".", "path", ".", "join", "(", "logger", ".", "get_dir", "(", ")", ",", "\"model.pkl\"", ")", "with", "tempfile", ".", "TemporaryDirectory", "(", ")", "as", "td", ":", "save_variables", "(", "os", ".", "path", ".", "join", "(", "td", ",", "\"model\"", ")", ")", "arc_name", "=", "os", ".", "path", ".", "join", "(", "td", ",", "\"packed.zip\"", ")", "with", "zipfile", ".", "ZipFile", "(", "arc_name", ",", "'w'", ")", "as", "zipf", ":", "for", "root", ",", "dirs", ",", "files", "in", "os", ".", "walk", "(", "td", ")", ":", "for", "fname", "in", "files", ":", "file_path", "=", "os", ".", "path", ".", "join", "(", "root", ",", "fname", ")", "if", "file_path", "!=", "arc_name", ":", "zipf", ".", "write", "(", "file_path", ",", "os", ".", "path", ".", "relpath", "(", "file_path", ",", "td", ")", ")", "with", "open", "(", "arc_name", ",", "\"rb\"", ")", "as", "f", ":", "model_data", "=", "f", ".", "read", "(", ")", "with", "open", "(", "path", ",", "\"wb\"", ")", "as", "f", ":", "cloudpickle", ".", "dump", "(", "(", "model_data", ",", "self", ".", "_act_params", ")", ",", "f", ")"],
    "docstring": "Save model to a pickle located at `path`",
    "docstring_tokens": ["Save", "model", "to", "a", "pickle", "located", "at", "path"],
    "func_name": "ActWrapper.save_act",
    "id": 0,
    "language": "python",
    "original_string": "def save_act(self, path=None):\n        \"\"\"Save model to a pickle located at `path`\"\"\"\n        if path is None:\n            path = os.path.join(logger.get_dir(), \"model.pkl\")\n\n        with tempfile.TemporaryDirectory() as td:\n            save_variables(os.path.join(td, \"model\"))\n            arc_name = os.path.join(td, \"packed.zip\")\n            with zipfile.ZipFile(arc_name, 'w') as zipf:\n                for root, dirs, files in os.walk(td):\n                    for fname in files:\n                        file_path = os.path.join(root, fname)\n                        if file_path != arc_name:\n                            zipf.write(file_path, os.path.relpath(file_path, td))\n            with open(arc_name, \"rb\") as f:\n                model_data = f.read()\n        with open(path, \"wb\") as f:\n            cloudpickle.dump((model_data, self._act_params), f)",
    "path": "baselines/deepq/deepq.py",
    "repo": "openai/baselines",
    "sha": "3301089b48c42b87b396e246ea3f56fa4bfc9678",
    "url": "https://github.com/openai/baselines/blob/3301089b48c42b87b396e246ea3f56fa4bfc9678/baselines/deepq/deepq.py#L55-L72"
}
```
#### ruby

An example of 'train' looks as follows.
```
{
    "code": "def render_body(context, options)\n      if options.key?(:partial)\n        [render_partial(context, options)]\n      else\n        StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n      end\n    end",
    "code_tokens": ["def", "render_body", "(", "context", ",", "options", ")", "if", "options", ".", "key?", "(", ":partial", ")", "[", "render_partial", "(", "context", ",", "options", ")", "]", "else", "StreamingTemplateRenderer", ".", "new", "(", "@lookup_context", ")", ".", "render", "(", "context", ",", "options", ")", "end", "end"],
    "docstring": "Render but returns a valid Rack body. If fibers are defined, we return\n    a streaming body that renders the template piece by piece.\n\n    Note that partials are not supported to be rendered with streaming,\n    so in such cases, we just wrap them in an array.",
    "docstring_tokens": ["Render", "but", "returns", "a", "valid", "Rack", "body", ".", "If", "fibers", "are", "defined", "we", "return", "a", "streaming", "body", "that", "renders", "the", "template", "piece", "by", "piece", "."],
    "func_name": "ActionView.Renderer.render_body",
    "id": 0,
    "language": "ruby",
    "original_string": "def render_body(context, options)\n      if options.key?(:partial)\n        [render_partial(context, options)]\n      else\n        StreamingTemplateRenderer.new(@lookup_context).render(context, options)\n      end\n    end",
    "path": "actionview/lib/action_view/renderer/renderer.rb",
    "repo": "rails/rails",
    "sha": "85a8bc644be69908f05740a5886ec19cd3679df5",
    "url": "https://github.com/rails/rails/blob/85a8bc644be69908f05740a5886ec19cd3679df5/actionview/lib/action_view/renderer/renderer.rb#L38-L44"
}
```
### Data Fields

Each data field is explained below. The fields are the same for every config and identical across all splits.

#### go, java, javascript, php, python, ruby

| field name       | type             | description                                                                        |
|------------------|------------------|------------------------------------------------------------------------------------|
| id               | int32            | Index of the sample                                                                |
| repo             | string           | repo: the owner/repo                                                               |
| path             | string           | path: the full path to the original file                                           |
| func_name        | string           | func_name: the function or method name                                             |
| original_string  | string           | original_string: the raw string before tokenization or parsing                     |
| language         | string           | language: the programming language name                                            |
| code             | string           | code/function: the part of the original_string that is code                        |
| code_tokens      | Sequence[string] | code_tokens/function_tokens: tokenized version of code                             |
| docstring        | string           | docstring: the top-level comment or docstring, if it exists in the original string |
| docstring_tokens | Sequence[string] | docstring_tokens: tokenized version of docstring                                   |
| sha              | string           | sha of the file                                                                    |
| url              | string           | url of the file                                                                    |
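The schema above can be sanity-checked against a record programmatically. A minimal sketch, with the type table transcribed from above and an abbreviated sample record (the `check_record` helper is hypothetical, not part of the dataset script):

```python
# Expected Python-side types per field, transcribed from the table above.
EXPECTED_TYPES = {
    "id": int,
    "repo": str,
    "path": str,
    "func_name": str,
    "original_string": str,
    "language": str,
    "code": str,
    "code_tokens": list,
    "docstring": str,
    "docstring_tokens": list,
    "sha": str,
    "url": str,
}


def check_record(record: dict) -> bool:
    """Return True if every field present in the record has the expected type."""
    return all(isinstance(record[k], t) for k, t in EXPECTED_TYPES.items() if k in record)


sample = {"id": 0, "repo": "etcd-io/etcd", "language": "go", "code_tokens": ["func", "NewSTM"]}
print(check_record(sample))  # True
```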
### Data Splits

| name       |  train | validation |  test |
|------------|-------:|-----------:|------:|
| go         | 167288 |       7325 |  8122 |
| java       | 164923 |       5183 | 10955 |
| javascript |  58025 |       3885 |  3291 |
| php        | 241241 |      12982 | 14014 |
| python     | 251820 |      13914 | 14918 |
| ruby       |  24927 |       1400 |  1261 |
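The split sizes can be cross-checked with a quick sum (counts copied verbatim from the table):

```python
# (train, validation, test) counts per language config.
SPLITS = {
    "go":         (167288, 7325, 8122),
    "java":       (164923, 5183, 10955),
    "javascript": (58025, 3885, 3291),
    "php":        (241241, 12982, 14014),
    "python":     (251820, 13914, 14918),
    "ruby":       (24927, 1400, 1261),
}

totals = {lang: sum(counts) for lang, counts in SPLITS.items()}
print(totals["go"])           # 182735 examples across all go splits
print(sum(totals.values()))   # 1005474 examples in the whole dataset
```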
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Data from CodeSearchNet Challenge dataset.
[More Information Needed]

#### Who are the source language producers?

Software Engineering developers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

https://github.com/microsoft, https://github.com/madlag

### Licensing Information

Computational Use of Data Agreement (C-UDA) License.

### Citation Information

```
@article{husain2019codesearchnet,
  title={Codesearchnet challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}
```

### Contributions

Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
code_x_glue_ct_code_to_text.py
ADDED
@@ -0,0 +1,155 @@
1 |
+
import json
|
2 |
+
import os
|
3 |
+
import os.path
|
4 |
+
from typing import List
|
5 |
+
|
6 |
+
import datasets
|
7 |
+
|
8 |
+
from .common import TrainValidTestChild
|
9 |
+
from .generated_definitions import DEFINITIONS
|
10 |
+
|
11 |
+
|
12 |
+
_DESCRIPTION = """The dataset we use comes from CodeSearchNet and we filter the dataset as the following:
|
13 |
+
- Remove examples that codes cannot be parsed into an abstract syntax tree.
|
14 |
+
- Remove examples that #tokens of documents is < 3 or >256
|
15 |
+
- Remove examples that documents contain special tokens (e.g. <img ...> or https:...)
|
16 |
+
- Remove examples that documents are not English.
|
17 |
+
"""
|
18 |
+
_CITATION = """@article{husain2019codesearchnet,
|
19 |
+
title={Codesearchnet challenge: Evaluating the state of semantic code search},
|
20 |
+
author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
|
21 |
+
journal={arXiv preprint arXiv:1909.09436},
|
22 |
+
year={2019}
|
23 |
+
}"""
|
24 |
+
|
25 |
+
|
26 |
+
class CodeXGlueCtCodeToTextBaseImpl(TrainValidTestChild):
|
27 |
+
_DESCRIPTION = _DESCRIPTION
|
28 |
+
_CITATION = _CITATION
|
29 |
+
|
30 |
+
# For each file, each line in the uncompressed file represents one function.
|
31 |
+
_FEATURES = {
|
32 |
+
"id": datasets.Value("int32"), # Index of the sample
|
33 |
+
"repo": datasets.Value("string"), # repo: the owner/repo
|
34 |
+
"path": datasets.Value("string"), # path: the full path to the original file
|
35 |
+
"func_name": datasets.Value("string"), # func_name: the function or method name
|
36 |
+
"original_string": datasets.Value("string"), # original_string: the raw string before tokenization or parsing
|
37 |
+
"language": datasets.Value("string"), # language: the programming language name
|
38 |
+
"code": datasets.Value("string"), # code/function: the part of the original_string that is code
|
39 |
+
"code_tokens": datasets.features.Sequence(
|
40 |
+
datasets.Value("string")
|
41 |
+
), # code_tokens/function_tokens: tokenized version of code
|
42 |
+
"docstring": datasets.Value(
|
43 |
+
"string"
|
44 |
+
), # docstring: the top-level comment or docstring, if it exists in the original string
|
45 |
+
"docstring_tokens": datasets.features.Sequence(
|
46 |
+
datasets.Value("string")
|
47 |
+
), # docstring_tokens: tokenized version of docstring
|
48 |
+
"sha": datasets.Value("string"), # sha of the file
|
49 |
+
"url": datasets.Value("string"), # url of the file
|
50 |
+
}
|
51 |
+
|
52 |
+
_SUPERVISED_KEYS = ["docstring", "docstring_tokens"]
|
53 |
+
|
54 |
+
def generate_urls(self, split_name, language):
|
55 |
+
yield "language", f"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/{language}.zip"
|
56 |
+
yield "dataset", "dataset.zip"
|
57 |
+
|
58 |
+
def get_data_files(self, split_name, file_paths, language):
|
59 |
+
language_specific_path = file_paths["language"]
|
60 |
+
final_path = os.path.join(language_specific_path, language, "final")
|
61 |
+
# Make some cleanup to save space
|
62 |
+
for path in os.listdir(final_path):
|
63 |
+
if path.endswith(".pkl"):
|
64 |
+
os.unlink(path)
|
65 |
+
|
66 |
+
data_files = []
|
67 |
+
for root, dirs, files in os.walk(final_path):
|
68 |
+
for file in files:
|
69 |
+
temp = os.path.join(root, file)
|
70 |
+
if ".jsonl" in temp:
|
71 |
+
if split_name in temp:
|
72 |
+
data_files.append(temp)
|
73 |
+
return data_files
|
74 |
+
|
75 |
+
def post_process(self, split_name, language, js):
|
76 |
+
return js
|
77 |
+
|
78 |
+
def _generate_examples(self, split_name, file_paths, language):
|
79 |
+
import gzip
|
80 |
+
|
81 |
+
data_set_path = file_paths["dataset"]
|
82 |
+
|
83 |
+
data_files = self.get_data_files(split_name, file_paths, language)
|
84 |
+
|
85 |
+
urls = {}
|
86 |
+
        f1_path_parts = [data_set_path, "dataset", language, f"{split_name}.txt"]
        if self.SINGLE_LANGUAGE:
            del f1_path_parts[2]

        f1_path = os.path.join(*f1_path_parts)
        with open(f1_path, encoding="utf-8") as f1:
            for line in f1:
                line = line.strip()
                urls[line] = True

        idx = 0
        for file in data_files:
            if ".gz" in file:
                f = gzip.open(file)
            else:
                f = open(file, encoding="utf-8")

            for line in f:
                line = line.strip()
                js = json.loads(line)
                if js["url"] in urls:
                    js["id"] = idx
                    js = self.post_process(split_name, language, js)
                    if "partition" in js:
                        del js["partition"]
                    yield idx, js
                    idx += 1
            f.close()


class CodeXGlueCtCodeToTextImpl(CodeXGlueCtCodeToTextBaseImpl):
    SINGLE_LANGUAGE = False

    def generate_urls(self, split_name):
        language = self.info["parameters"]["language"]
        for e in super().generate_urls(split_name, language):
            yield e

    def _generate_examples(self, split_name, file_paths):
        language = self.info["parameters"]["language"]
        for e in super()._generate_examples(split_name, file_paths, language):
            yield e


CLASS_MAPPING = {
    "CodeXGlueCtCodeToText": CodeXGlueCtCodeToTextImpl,
}


class CodeXGlueCtCodeToText(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = datasets.BuilderConfig
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
    ]

    def _info(self):
        name = self.config.name
        info = DEFINITIONS[name]
        if info["class_name"] in CLASS_MAPPING:
            self.child = CLASS_MAPPING[info["class_name"]](info)
        else:
            raise RuntimeError(f"Unknown python class for dataset configuration {name}")
        ret = self.child._info()
        return ret

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        return self.child._split_generators(dl_manager=dl_manager)

    def _generate_examples(self, split_name, file_paths):
        return self.child._generate_examples(split_name, file_paths)
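The per-split filtering that `_generate_examples` performs (keep only JSONL records whose `"url"` is in the split's allow-list, renumber ids, drop the `"partition"` field) can be exercised in isolation. `filter_examples` and the sample records below are illustrative stand-ins, not part of the script:

```python
import json


def filter_examples(jsonl_lines, allowed_urls):
    """Keep only records whose "url" is allow-listed, renumbering ids."""
    idx = 0
    for line in jsonl_lines:
        js = json.loads(line.strip())
        if js["url"] in allowed_urls:
            js["id"] = idx
            js.pop("partition", None)  # split membership is implied by the allow-list
            yield idx, js
            idx += 1


records = [
    json.dumps({"url": "https://example.com/a", "code": "def a(): pass", "partition": "train"}),
    json.dumps({"url": "https://example.com/b", "code": "def b(): pass", "partition": "train"}),
]
kept = list(filter_examples(records, {"https://example.com/b"}))
```

Only the second record survives, and it is re-assigned id 0 with its `partition` key removed.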
common.py
ADDED
@@ -0,0 +1,75 @@
from typing import List

import datasets


# Citation, taken from https://github.com/microsoft/CodeXGLUE
_DEFAULT_CITATION = """@article{CodeXGLUE,
title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
year={2020},}"""


class Child:
    _DESCRIPTION = None
    _FEATURES = None
    _CITATION = None
    SPLITS = {"train": datasets.Split.TRAIN}
    _SUPERVISED_KEYS = None

    def __init__(self, info):
        self.info = info

    def homepage(self):
        return self.info["project_url"]

    def _info(self):
        # This is the description that will appear on the datasets page.
        return datasets.DatasetInfo(
            description=self.info["description"] + "\n\n" + self._DESCRIPTION,
            features=datasets.Features(self._FEATURES),
            homepage=self.homepage(),
            citation=self._CITATION or _DEFAULT_CITATION,
            supervised_keys=self._SUPERVISED_KEYS,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        SPLITS = self.SPLITS
        _URL = self.info["raw_url"]
        urls_to_download = {}
        for split in SPLITS:
            if split not in urls_to_download:
                urls_to_download[split] = {}

            for key, url in self.generate_urls(split):
                if not url.startswith("http"):
                    url = _URL + "/" + url
                urls_to_download[split][key] = url

        downloaded_files = {}
        for k, v in urls_to_download.items():
            downloaded_files[k] = dl_manager.download_and_extract(v)

        return [
            datasets.SplitGenerator(
                name=SPLITS[k],
                gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
            )
            for k in SPLITS
        ]

    def check_empty(self, entries):
        all_empty = all([v == "" for v in entries.values()])
        all_non_empty = all([v != "" for v in entries.values()])

        if not all_non_empty and not all_empty:
            raise RuntimeError("Parallel data files should have the same number of lines.")

        return all_empty


class TrainValidTestChild(Child):
    SPLITS = {
        "train": datasets.Split.TRAIN,
        "valid": datasets.Split.VALIDATION,
        "test": datasets.Split.TEST,
    }
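The URL handling inside `_split_generators` resolves relative entries from `generate_urls()` against the config's `raw_url`, while absolute `http(s)` URLs pass through untouched. A minimal sketch of that rule (`resolve_url` is an illustrative helper, not part of `common.py`):

```python
def resolve_url(raw_url, url):
    # Relative paths are joined onto the config's raw_url base;
    # anything already starting with "http" is kept as-is.
    return url if url.startswith("http") else raw_url + "/" + url


base = "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text"
relative = resolve_url(base, "dataset.zip")      # joined onto base
absolute = resolve_url(
    base, "https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip"
)                                                # passes through unchanged
```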
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"go": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "go", "version": 
{"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 342244027, "num_examples": 167288, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 13721912, "num_examples": 7325, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 16328458, "num_examples": 8122, "dataset_name": "code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/go.zip": {"num_bytes": 487525935, "checksum": "15d23f01dc2796447e1736263e6830079289d5ef41f09988011afdcf8da6b6e5"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 499922799, "post_processing_size": null, "dataset_size": 372294397, "size_in_bytes": 872217196}, "java": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. 
<img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "java", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 452554719, "num_examples": 164923, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 13366396, "num_examples": 5183, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 29080857, "num_examples": 10955, "dataset_name": 
"code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/java.zip": {"num_bytes": 1060569153, "checksum": "05f9204b1808413fab30f0e69229e298f6de4ad468279d53a2aa5797e3a78c17"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 1072966017, "post_processing_size": null, "dataset_size": 495001972, "size_in_bytes": 1567967989}, "javascript": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, 
"id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "javascript", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 160860743, "num_examples": 58025, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 10337396, "num_examples": 3885, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 10190765, "num_examples": 3291, "dataset_name": "code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/javascript.zip": {"num_bytes": 1664713350, "checksum": "fdc743f5af27f90c77584a2d29e2b7f8cecdd00c37b433c385b888ee062936dd"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 1677110214, "post_processing_size": null, "dataset_size": 181388904, "size_in_bytes": 1858499118}, "php": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. 
<img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "php", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 614655799, "num_examples": 241241, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 33283149, "num_examples": 12982, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 35375097, "num_examples": 14014, "dataset_name": 
"code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/php.zip": {"num_bytes": 851894048, "checksum": "c3bbf0d1b10010f88b058faea876f1f5471758399e30d58c11f78ff53660ce00"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 864290912, "post_processing_size": null, "dataset_size": 683314045, "size_in_bytes": 1547604957}, "python": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. <img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": 
null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "python", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 813664500, "num_examples": 251820, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 46888668, "num_examples": 13914, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 50659792, "num_examples": 14918, "dataset_name": "code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip": {"num_bytes": 940909997, "checksum": "7223c6460bebfa85697b586da91e47bc5d64790a4d60bba5917106458ab6b40e"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 953306861, "post_processing_size": null, "dataset_size": 911212960, "size_in_bytes": 1864519821}, "ruby": {"description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text\n\nThe dataset we use comes from CodeSearchNet and we filter the dataset as the following:\n- Remove examples that codes cannot be parsed into an abstract syntax tree.\n- Remove examples that #tokens of documents is < 3 or >256\n- Remove examples that documents contain special tokens (e.g. 
<img ...> or https:...)\n- Remove examples that documents are not English.\n", "citation": "@article{husain2019codesearchnet,\ntitle={Codesearchnet challenge: Evaluating the state of semantic code search},\nauthor={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},\njournal={arXiv preprint arXiv:1909.09436},\nyear={2019}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "repo": {"dtype": "string", "id": null, "_type": "Value"}, "path": {"dtype": "string", "id": null, "_type": "Value"}, "func_name": {"dtype": "string", "id": null, "_type": "Value"}, "original_string": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "code": {"dtype": "string", "id": null, "_type": "Value"}, "code_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "docstring": {"dtype": "string", "id": null, "_type": "Value"}, "docstring_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "sha": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "docstring", "output": "docstring_tokens"}, "task_templates": null, "builder_name": "code_x_glue_ct_code_to_text", "config_name": "ruby", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 51956595, "num_examples": 24927, "dataset_name": "code_x_glue_ct_code_to_text"}, "validation": {"name": "validation", "num_bytes": 2821089, "num_examples": 1400, "dataset_name": "code_x_glue_ct_code_to_text"}, "test": {"name": "test", "num_bytes": 2671603, "num_examples": 1261, "dataset_name": 
"code_x_glue_ct_code_to_text"}}, "download_checksums": {"https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/ruby.zip": {"num_bytes": 111758028, "checksum": "67aee5812d0f994df745c771c7791483f2b060561495747d424e307af4b342e6"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text/dataset.zip": {"num_bytes": 12396864, "checksum": "31ec750805302ecd71b278a492d23d2ac916269f7ec645bba4f23b6f7c4bf217"}}, "download_size": 124154892, "post_processing_size": null, "dataset_size": 57449287, "size_in_bytes": 181604179}}
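`dataset_infos.json` maps each config name to its metadata, including per-split example counts under `splits.<name>.num_examples`. A small sketch of pulling those counts out (`split_sizes` and the trimmed `infos` literal below are illustrative, using the ruby figures from the file):

```python
def split_sizes(dataset_infos, config):
    # Collect {split_name: num_examples} for one dataset configuration.
    return {
        name: split["num_examples"]
        for name, split in dataset_infos[config]["splits"].items()
    }


infos = {
    "ruby": {
        "splits": {
            "train": {"num_examples": 24927},
            "validation": {"num_examples": 1400},
            "test": {"num_examples": 1261},
        }
    }
}
sizes = split_sizes(infos, "ruby")
```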
dummy/go/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:420fa7dff247ff60fa907fc60b9ea10976ae28ce1f1e0656eeb20de7df7cb4b3
size 7488
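The `dummy_data.zip` entries are Git LFS pointer stubs: three `key value` lines (spec version, content hash, byte size) stored in place of the real archive. Parsing one is straightforward (`parse_lfs_pointer` is an illustrative helper):

```python
def parse_lfs_pointer(text):
    # Each pointer line is "key value"; split on the first space only,
    # since the value (a URL or "sha256:..." hash) may itself vary.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return fields["oid"], int(fields["size"])


pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:420fa7dff247ff60fa907fc60b9ea10976ae28ce1f1e0656eeb20de7df7cb4b3
size 7488
"""
oid, size = parse_lfs_pointer(pointer)
```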
dummy/java/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f0ae91d211dce7a40d3d30f27f799ad548878ae3ac848dfc2a841f2f1fbcd7a
size 8878
dummy/javascript/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b429f8c3f015624f449a3451f7c01cad7fe0cdf2a891f336c1d0077b84f54f1
size 11378
dummy/php/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:963fd71a278bb122126ca6f1b9e266f3549b611d2910210a61c0919916578d7e
size 7911
dummy/python/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:116f265b6704feaac7f1c6de13329cbe28d8edfffa474a5fd267056dbcf03be4
size 14946
dummy/ruby/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98260173829506a84cb4e9f3a849787311335ae974831671fe9c8e2fc7864d18
size 9844
generated_definitions.py
ADDED
@@ -0,0 +1,68 @@
DEFINITIONS = {
    "go": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "go",
        "parameters": {"language": "go"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 8122, "train": 167288, "validation": 7325},
    },
    "java": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "java",
        "parameters": {"language": "java"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 10955, "train": 164923, "validation": 5183},
    },
    "javascript": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "javascript",
        "parameters": {"language": "javascript"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 3291, "train": 58025, "validation": 3885},
    },
    "php": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "php",
        "parameters": {"language": "php"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 14014, "train": 241241, "validation": 12982},
    },
    "python": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "python",
        "parameters": {"language": "python"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 14918, "train": 251820, "validation": 13914},
    },
    "ruby": {
        "class_name": "CodeXGlueCtCodeToText",
        "dataset_type": "Code-Text",
        "description": "CodeXGLUE code-to-text dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "dir_name": "code-to-text",
        "name": "ruby",
        "parameters": {"language": "ruby"},
        "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Text/code-to-text",
        "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Text/code-to-text",
        "sizes": {"test": 1261, "train": 24927, "validation": 1400},
    },
}
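Because every config in `generated_definitions.py` carries a `sizes` dict, aggregate statistics can be computed with a simple comprehension. A sketch using a trimmed two-entry copy of the table (`DEFS` and `total_examples` are illustrative, with the go and ruby figures from above):

```python
DEFS = {
    "go": {"sizes": {"test": 8122, "train": 167288, "validation": 7325}},
    "ruby": {"sizes": {"test": 1261, "train": 24927, "validation": 1400}},
}


def total_examples(defs, split):
    # Sum one split's example count across all configurations.
    return sum(cfg["sizes"][split] for cfg in defs.values())


train_total = total_examples(DEFS, "train")  # 167288 + 24927 = 192215
```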