license: mit
The dataset is imported from CodeXGLUE and pre-processed using their script.
Where to find in Semeru:

The dataset can be found at /nfs/semeru/semeru_datasets/code_xglue/code-to-text/go.
# CodeXGLUE -- Code-To-Text
## Task Definition

The task is to generate a natural language comment for a piece of code; it is evaluated by smoothed BLEU-4 score.
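As a rough illustration of the metric, the sketch below computes a sentence-level smoothed BLEU-4 score with NLTK. This is an approximation only: CodeXGLUE ships its own smoothed BLEU evaluation script for official scoring, and the reference/candidate strings here are made-up examples.

```python
# Minimal sketch of sentence-level smoothed BLEU-4 using NLTK.
# Not the official CodeXGLUE evaluator; the example strings are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "returns the sum of two integers".split()   # gold docstring tokens
candidate = "return the sum of two ints".split()         # model-generated tokens

smooth = SmoothingFunction().method4   # one common smoothing choice
score = sentence_bleu(
    [reference],                       # list of reference token lists
    candidate,
    weights=(0.25, 0.25, 0.25, 0.25),  # uniform 4-gram weights -> BLEU-4
    smoothing_function=smooth,
)
print(f"smoothed BLEU-4: {score:.4f}")
```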
## Dataset

The dataset we use comes from CodeSearchNet, and we filter it as follows (a sketch of these filters appears after the list):
- Remove examples whose code cannot be parsed into an abstract syntax tree.
- Remove examples whose documentation is shorter than 3 tokens or longer than 256 tokens.
- Remove examples whose documentation contains special tokens (e.g. <img ...> or https:...).
- Remove examples whose documentation is not written in English.
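For illustration only, here is a sketch of how these filters might look when applied to one raw CodeSearchNet record. The field names follow the data format described in the next section; parse_fn is a hypothetical AST-parsing helper, and the ASCII check is only a rough stand-in for real language detection, so this is not the CodeXGLUE preprocessing script itself.

```python
import re

def keep_example(example, parse_fn):
    """Illustrative filter mirroring the criteria above (not the official
    CodeXGLUE preprocessing script). `parse_fn` is an assumed helper that
    returns an AST, or None when the code cannot be parsed."""
    docstring = example["docstring"]
    doc_tokens = example["docstring_tokens"]

    # 1. Drop examples whose code cannot be parsed into an abstract syntax tree.
    if parse_fn(example["code"]) is None:
        return False
    # 2. Drop documentation with fewer than 3 or more than 256 tokens.
    if len(doc_tokens) < 3 or len(doc_tokens) > 256:
        return False
    # 3. Drop documentation containing special tokens such as <img ...> or URLs.
    if re.search(r"<img\s|https?:", docstring):
        return False
    # 4. Drop documentation that is not English; approximated here with an
    #    ASCII check, which is only a rough proxy for language detection.
    if not docstring.isascii():
        return False
    return True
```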
## Data Format

After preprocessing, the dataset consists of three .jsonl files: train.jsonl, valid.jsonl, and test.jsonl. In each file, every line represents one function. The fields of a record are listed below, followed by a small loading sketch.
- repo: the owner/repo
- path: the full path to the original file
- func_name: the function or method name
- original_string: the raw string before tokenization or parsing
- language: the programming language
- code/function: the part of the original_string that is code
- code_tokens/function_tokens: tokenized version of code
- docstring: the top-level comment or docstring, if it exists in the original string
- docstring_tokens: tokenized version of docstring
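As a small usage sketch, the snippet below reads the Go test split line by line and prints the first record's repository, function name, and documentation tokens. The directory comes from the Semeru path above and the file name from this section; combining them into the exact path shown here is an assumption.

```python
import json

# Read the Go test split; each line of the .jsonl file is one function record.
# The exact path is assumed from the Semeru location and file names above.
path = "/nfs/semeru/semeru_datasets/code_xglue/code-to-text/go/test.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        code_tokens = example["code_tokens"]      # tokenized Go function
        doc_tokens = example["docstring_tokens"]  # tokenized target comment
        print(example["repo"], example["func_name"])
        print(" ".join(doc_tokens))
        break  # show only the first record
```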
## Data Statistics
| Programming Language | Training | Dev    | Test   |
| -------------------- | -------- | ------ | ------ |
| Python               | 251,820  | 13,914 | 14,918 |
| PHP                  | 241,241  | 12,982 | 14,014 |
| Go                   | 167,288  | 7,325  | 8,122  |
| Java                 | 164,923  | 5,183  | 10,955 |
| JavaScript           | 58,025   | 3,885  | 3,291  |
| Ruby                 | 24,927   | 1,400  | 1,261  |
## Reference

@article{husain2019codesearchnet,
  title={CodeSearchNet challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}