---
dataset_info:
- config_name: default
  features:
  - name: language
    dtype: string
  - name: id
    dtype: int64
  - name: repo_owner
    dtype: string
  - name: repo_name
    dtype: string
  - name: head_branch
    dtype: string
  - name: workflow_name
    dtype: string
  - name: workflow_filename
    dtype: string
  - name: workflow_path
    dtype: string
  - name: contributor
    dtype: string
  - name: sha_fail
    dtype: string
  - name: sha_success
    dtype: string
  - name: workflow
    dtype: string
  - name: logs
    list:
    - name: step_name
      dtype: string
    - name: log
      dtype: string
  - name: diff
    dtype: string
  - name: difficulty
    dtype: int64
  - name: changed_files
    sequence: string
  - name: commit_link
    dtype: string
  - name: commit_date
    dtype: string
  splits:
  - name: test
    num_bytes: 11705172.916666666
    num_examples: 68
  download_size: 1450880
  dataset_size: 11705172.916666666
- config_name: old
  features:
  - name: language
    dtype: string
  - name: id
    dtype: int64
  - name: repo_owner
    dtype: string
  - name: repo_name
    dtype: string
  - name: head_branch
    dtype: string
  - name: workflow_name
    dtype: string
  - name: workflow_filename
    dtype: string
  - name: workflow_path
    dtype: string
  - name: contributor
    dtype: string
  - name: sha_fail
    dtype: string
  - name: sha_success
    dtype: string
  - name: workflow
    dtype: string
  - name: logs
    list:
    - name: step_name
      dtype: string
    - name: log
      dtype: string
  - name: diff
    dtype: string
  - name: difficulty
    dtype: int64
  - name: changed_files
    sequence: string
  - name: commit_link
    dtype: string
  - name: commit_date
    dtype: string
  splits:
  - name: test
    num_bytes: 24787425
    num_examples: 144
  download_size: 3550754
  dataset_size: 24787425
configs:
- config_name: default
  data_files:
  - split: test
    path: data/python/test-*
- config_name: old
  data_files:
  - split: test
    path: old/python/test-*
---
# 🏟️ Long Code Arena (CI builds repair)
This is the benchmark for the CI builds repair task as part of the
🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
🛠️ Task. Given the logs of a failed GitHub Actions workflow and the corresponding repository snapshot,
repair the repository contents in order to make the workflow pass.
All the data is collected from repositories published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.
To score your model on this dataset, you can use [**CI build repair benchmark**](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark).
📩 If you have any questions or requests concerning this dataset, please contact [email protected]
## How-to
### List all the available configs
via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
and choose an appropriate one.
Current configs: `default`, `old`
### Load the data
via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
```
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-ci-builds-repair", split="test")
```
Note that all the data we have is considered to be in the test split.
**NOTE**: If you encounter any errors with loading the dataset on Windows, update the `datasets` library (was tested on `datasets==2.16.1`)
### Usage
For the dataset usage please refer to our [CI builds repair benchmark](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark).
Its workflow is as follows:
1. Repairs the repo via the `fix_repo_function`, which utilizes the repo state and the logs of the failed workflow;
2. Sends the datapoints to GitHub to run the workflows;
3. Requests the results from GitHub;
4. Analyzes the results and prints them;
5. Clones the necessary repos to the user's local machine.
The user should run their model to repair the failing CI workflows, and the benchmark will push commits to GitHub,
returning the results of the workflow runs for all the datapoints.
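The `fix_repo_function` from step 1 is implemented by the user. Purely as an illustration (the signature, the `repo_path` parameter, and the return value below are assumptions for this sketch, not the benchmark's actual API — see the benchmark repository for the real interface), a stub that inspects the failed steps of a datapoint could look like this:

```python
def fix_repo_function(datapoint, repo_path):
    """Hypothetical repair hook; the real interface is defined by the
    benchmark repository, and this signature is only an assumption."""
    # Names of the CI steps that failed, taken from the `logs` field.
    failed_steps = [entry["step_name"] for entry in datapoint["logs"]]
    # A real implementation would now edit files under `repo_path`
    # based on the log contents; this stub only reports what failed.
    return failed_steps

# A minimal datapoint fragment shaped like the schema in this card:
sample = {"logs": [{"step_name": "checks (3.12, pylint)/4_Run check.txt",
                    "log": "##[group]Run pip install -U tox\n<...>"}]}
print(fix_repo_function(sample, repo_path="/tmp/repo"))
# ['checks (3.12, pylint)/4_Run check.txt']
```

A real repair function would modify files in the checked-out repository so that the subsequent push (step 2) triggers a passing workflow run.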
## Dataset Structure
This dataset contains logs of the failed GitHub Action workflows for some commits
followed by the commit that passes the workflow successfully.
Note that, unlike other 🏟️ Long Code Arena datasets, this dataset does not contain repositories.
### Datapoint Schema
Each example has the following fields:
| Field | Description |
|---------------------|------------------------------------------------------------------------------------------------------------------------------|
| `contributor`       | Username of the contributor who committed the changes |
| `difficulty`        | Difficulty of the problem (assessor-based; 1 means the repair requires only code formatting) |
| `diff`              | Contents of the diff between the failed and the successful commits |
| `head_branch`       | Name of the original branch that the commit was pushed to |
| `id`                | Unique ID of the datapoint |
| `language`          | Main language of the repository |
| `logs`              | List of dicts with keys `log` (logs of the failed job, particular step) and `step_name` (name of the failed step of the job) |
| `repo_name`         | Name of the original repository (second part of the `owner/name` on GitHub) |
| `repo_owner`        | Owner of the original repository (first part of the `owner/name` on GitHub) |
| `sha_fail`          | SHA of the failed commit |
| `sha_success`       | SHA of the successful commit |
| `workflow`          | Contents of the workflow file |
| `workflow_filename` | Name of the workflow file (without directories) |
| `workflow_name`     | Name of the workflow |
| `workflow_path`     | Full path to the workflow file |
| `changed_files`     | List of files changed in the diff |
| `commit_link`       | URL of the commit corresponding to the failed job |
| `commit_date`       | Date of the commit |
### Datapoint Example
```
{'contributor': 'Gallaecio',
 'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
 'difficulty': 2,
'head_branch': 'component-getters',
'id': 18,
'language': 'Python',
'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
'repo_name': 'scrapy',
'repo_owner': 'scrapy',
'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
'workflow_filename': 'checks.yml',
'workflow_name': 'Checks',
'workflow_path': '.github/workflows/checks.yml',
'changed_files': ["scrapy/crawler.py"],
'commit_link': "https://github.com/scrapy/scrapy/tree/0f71221cf9875ed8ef3400e1008408e79b6691e6"}
```
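Since the `diff` field holds a standard unified diff, the paths in `changed_files` can be cross-checked by parsing the `diff --git` headers. A small, self-contained illustration (not part of the dataset tooling):

```python
import re

def files_from_diff(diff_text):
    """Extract target file paths from `diff --git a/... b/...` headers."""
    return re.findall(r"^diff --git a/\S+ b/(\S+)", diff_text,
                      flags=re.MULTILINE)

# Header taken from the datapoint example above (contents elided):
diff = "diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>"
print(files_from_diff(diff))  # ['scrapy/crawler.py']
```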
## Citing
```
@article{bogomolov2024long,
title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
journal={arXiv preprint arXiv:2406.11612},
year={2024}
}
```
You can find the paper [here](https://arxiv.org/abs/2406.11612).