---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- translation
language:
- en
tags:
- code
pretty_name: BabelCode Transcoder
size_categories:
- 1K<n<10K
source_datasets:
  - original
  - extended|transcoder
---
# Dataset Card for BabelCode Transcoder

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)

### How To Use This Dataset

To use this dataset, you can either use the original [BabelCode repo](https://github.com/google-research/babelcode) or the [`bc_eval` metric](https://huggingface.co/spaces/gabeorlanski/bc_eval).

### Dataset Summary

The [Transcoder](https://github.com/facebookresearch/CodeGen) dataset in BabelCode format. It currently supports translation from C++ and Python source solutions.
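
For translation-style use, each row pairs a gold Python solution and a gold C++ solution with a target `language`. The snippet below is a minimal, illustrative sketch of building a zero-shot translation prompt from these fields; the prompt wording and the choice of target language are assumptions, not the format used in the paper.

```python
from datasets import load_dataset

ds = load_dataset("gabeorlanski/bc-transcoder")["test"]

# Each row targets one supported language; keep, for example, the Java rows
# (the exact string values follow the language list below).
java_rows = ds.filter(lambda ex: ex["language"] == "Java")

example = java_rows[0]
# Purely illustrative prompt; adapt it to your own setup.
prompt = (
    f"Translate the following Python function to {example['language']}.\n\n"
    f"{example['source_py']}\n\n"
    f"Target signature: {example['signature']}\n"
)
```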

### Supported Tasks and Leaderboards

### Languages
BC-Transcoder supports:
* C++
* C#
* Dart
* Go
* Haskell
* Java
* Javascript
* Julia
* Kotlin
* Lua
* PHP
* Python
* R
* Rust
* Scala
* TypeScript

## Dataset Structure

```python
>>> from datasets import load_dataset
>>> load_dataset("gabeorlanski/bc-transcoder")
DatasetDict({
    test: Dataset({
        features: ['qid', 'title', 'language', 'signature', 'arguments', 'source_py', 'source_cpp', 'question_info'],
        num_rows: 8384
    })
})
```

### Data Fields

- `qid`: The question ID used for running tests.
- `title`: The title of the question.
- `language`: The programming language of the example.
- `signature`: The signature for the problem.
- `arguments`: The arguments of the problem.
- `source_py`: The source solution in Python.
- `source_cpp`: The source solution in C++.
- `question_info`: The dict of information used for executing predictions. It has the keys:
  - `test_code`: The raw testing script used in the language. To use it, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points, then replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction (see the sketch after this list).
  - `test_list`: The raw JSON line containing the list of tests for the problem. To load them, use `json.loads`.
  - `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not.
  - `entry_fn_name`: The name of the function to use as the entry point.
  - `entry_cls_name`: The name of the class to use as the entry point.
  - `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename.
  - `timeouts`: The default timeouts for each command.
  - `extension`: The extension for the prediction file.
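
As a rough illustration of how these fields fit together, here is a minimal sketch of turning a postprocessed prediction into an executable test file. The `prediction_code` variable and the output filename are hypothetical, and the language-specific postprocessing and command execution are not shown; see the BabelCode repository or the `bc_eval` metric for the full pipeline.

```python
import json
from datasets import load_dataset

ds = load_dataset("gabeorlanski/bc-transcoder")["test"]
info = ds[0]["question_info"]

prediction_code = "..."  # hypothetical: a postprocessed prediction in ds[0]["language"]

# Fill the placeholders in the raw testing script with the entry points
# and the prediction body, as described above.
test_file = (
    info["test_code"]
    .replace("PLACEHOLDER_FN_NAME", info["entry_fn_name"])
    .replace("PLACEHOLDER_CLS_NAME", info["entry_cls_name"])
    .replace("PLACEHOLDER_CODE_BODY", prediction_code)
)

# The raw test list is a JSON string; `test_case_ids` determines which
# cases must pass.
tests = json.loads(info["test_list"])

# Write the file with the language-specific extension; it can then be run
# with the commands in info["commands"] (each has a __FILENAME__ hole).
with open(f"prediction.{info['extension']}", "w") as f:
    f.write(test_file)
```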

**NOTE:** If you want to use a different function name (or class name, for languages that require one) for the prediction, you must update `entry_fn_name` and `entry_cls_name` accordingly. For example, if the original question has an `entry_fn_name` of `add` but you want to change it to `f`, you must set `entry_fn_name` in the example's `question_info` to `f`:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("gabeorlanski/bc-mbpp")['test']
>>> # The original entry_fn_name
>>> question_info = ds[0]['question_info']
>>> question_info['entry_fn_name']
'removeOcc'
>>> # You MUST update the corresponding entry_fn_name in the copy you use
>>> # (indexing the dataset returns a new dict, so assigning to
>>> # ds[0][...] directly would not persist)
>>> question_info['entry_fn_name'] = 'f'
>>> question_info['entry_fn_name']
'f'
```

## Dataset Creation
See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated. 

For information on the original curation of the Transcoder dataset, please see [Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf) by Roziere et al.

### Dataset Curators
Google Research

### Licensing Information
CC-BY-4.0

### Citation Information
```
@article{orlanski2023measuring,
  title={Measuring The Impact Of Programming Language Distribution},
  author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishabh and Catasta, Michele},
  journal={arXiv preprint arXiv:2302.01973},
  year={2023}
}
@article{roziere2020unsupervised,
  title={Unsupervised translation of programming languages},
  author={Roziere, Baptiste and Lachaux, Marie-Anne and Chanussot, Lowik and Lample, Guillaume},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}
```