Update README.md
README.md CHANGED
@@ -35,14 +35,11 @@ configs:
 ---
 # 🏟️ Long Code Arena (Module Summarization)

-This is the data for Module Summarization benchmark as part of Long Code Arena provided by Jetbrains Research.
+This is the data for the Module Summarization benchmark as part of [Long Code Arena](https://huggingface.co/spaces/JetBrains-Research/long-code-arena) provided by JetBrains Research.

 ## How-to

-
-
-
-3. Load the data via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
+Load the data via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):

 ```
 from datasets import load_dataset
@@ -78,4 +75,4 @@ The data points could be removed upon request

 To compare the predicted and ground truth documentation, we introduce a new metric based on an LLM as an assessor. Our approach involves feeding the LLM the relevant code and two versions of the documentation: the ground truth and the model-generated text. The LLM evaluates which documentation better explains and fits the code. To mitigate variance and potential ordering effects in model responses, we calculate the probability that the generated documentation is superior by averaging the results of two queries:

-For more details about metric implementation go to [our github repository](https://github.com/JetBrains-Research/lca-baselines
+For more details about the metric implementation, go to [our GitHub repository](https://github.com/JetBrains-Research/lca-baselines)
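For context on the `## How-to` step changed above, here is a minimal sketch of loading the data with `load_dataset`. The dataset identifier below is an assumption (the changed lines do not show it); substitute the repository name from this dataset card.

```python
from datasets import load_dataset

# Assumed Hugging Face dataset id -- replace with the id from this dataset card.
dataset = load_dataset("JetBrains-Research/lca-module-summarization")

# Inspect the available splits and one example; field names depend on the dataset schema.
print(dataset)
```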
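The unchanged paragraph on the LLM-as-assessor metric describes averaging two queries so that ordering effects cancel out. Below is a minimal sketch of that averaging, assuming a hypothetical `llm_judge(code, first_doc, second_doc)` helper that returns the probability that the first documentation better explains the code; the actual implementation lives in the linked lca-baselines repository.

```python
def averaged_preference(llm_judge, code, ground_truth_doc, generated_doc):
    """Probability that the generated documentation is judged better,
    averaged over both presentation orders to reduce ordering bias.

    `llm_judge(code, first_doc, second_doc)` is an assumed helper that returns
    the probability that the FIRST documentation better explains the code.
    """
    # Query 1: generated documentation shown first.
    p_generated_first = llm_judge(code, generated_doc, ground_truth_doc)
    # Query 2: ground truth shown first; convert to the probability
    # that the generated documentation wins.
    p_generated_second = 1.0 - llm_judge(code, ground_truth_doc, generated_doc)
    return (p_generated_first + p_generated_second) / 2.0
```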