language:
- en
---
# 🏟️ Long Code Arena (Module summarization)

This is the benchmark for the Module summarization task, part of the
🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
The current version includes 216 manually curated text files with documentation of open-source permissive Python projects.
The model is required to generate such a description given the relevant code context and the intent behind the documentation.
All the repositories are published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

## How-to
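
A minimal sketch of loading the benchmark with the 🤗 `datasets` library (the dataset ID and split name below are assumptions based on this card's location; adjust them if they differ):

```python
# Loading the benchmark with the Hugging Face datasets library.
# The dataset ID and split name are assumptions; adjust if needed.
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-module-summarization", split="test")

# Inspect one datapoint and the fields described in the table below.
example = dataset[0]
print(example["repo"], example["docfile_name"])
print(example["intent"])
```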

Each example has the following fields:

| **Field** | **Description** |
|:---------------------------:|:----------------------------------------:|
| `repo` | Name of the repository |
| `target_text` | Text of the target documentation file |
| `docfile_name` | Name of the file with the target documentation |
| `intent` | One-sentence description of what is expected in the documentation |
| `license` | License of the target repository |
| `relevant_code_files` | Paths to relevant code files (files that are mentioned in the target documentation) |
| `relevant_code_dir` | Paths to relevant code directories (directories that are mentioned in the target documentation) |
| `path_to_docfile` | Path to the documentation file in the source repository |
| `relevant_code_context` | Relevant code context collected from the relevant code files and directories |

**Note**: you may collect and use your own relevant context, as our context may not be suitable for every approach. Zipped repositories can be found in the `repos` directory.
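
If you decide to build your own context, a minimal sketch of unpacking one of the zipped repositories and concatenating its Python sources could look like this (the archive name is hypothetical):

```python
# Building custom relevant context from a zipped repository.
# The archive name below is hypothetical; pick any archive from `repos`.
import zipfile
from pathlib import Path

archive = Path("repos/example-repo.zip")  # hypothetical archive name
target_dir = Path("unpacked/example-repo")

with zipfile.ZipFile(archive) as zf:
    zf.extractall(target_dir)

# Concatenate all Python files into a single context string.
context = "\n\n".join(
    path.read_text(encoding="utf-8", errors="ignore")
    for path in sorted(target_dir.rglob("*.py"))
)
print(f"Collected {len(context):,} characters of context")
```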

## Metric

To compare the predicted documentation with the ground truth, we introduce a new metric that uses an LLM as an assessor. Our approach involves feeding the LLM the relevant code and two versions of the documentation: the ground truth and the model-generated text. The LLM evaluates which documentation better explains and fits the code. To mitigate variance and potential ordering effects in model responses, we calculate the probability that the generated documentation is superior by averaging the results of two queries with the order of the two documents swapped.
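
As an illustration, the order-averaged comparison could be sketched as follows (`ask_llm_judge` is a hypothetical helper standing in for the actual prompting code, which lives in the repository linked below):

```python
# Sketch of the order-averaged LLM-as-assessor metric.
# ask_llm_judge is a hypothetical helper that returns the probability
# that the FIRST documentation variant fits the code better.
def ask_llm_judge(code: str, doc_first: str, doc_second: str) -> float:
    raise NotImplementedError  # query your LLM of choice here

def prob_generated_is_better(code: str, ground_truth: str, generated: str) -> float:
    # Two queries with the order of the documents swapped, then averaged,
    # to mitigate positional bias in the assessor's responses.
    p_shown_first = ask_llm_judge(code, generated, ground_truth)
    p_shown_second = 1.0 - ask_llm_judge(code, ground_truth, generated)
    return (p_shown_first + p_shown_second) / 2.0
```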

For more details about the metric implementation, please refer to [our GitHub repository](https://github.com/JetBrains-Research/lca-baselines).