prompteus committed a42960f (parent: 12918c9)

Update README.md

Files changed (1): README.md +12 -6
README.md CHANGED
@@ -98,22 +98,28 @@ This dataset presents in-context scenarios where models can outsource the comput
  First, we translated the questions into English using Google Translate. Next, we parsed the equations and the results. We linearized
  the equations into a sequence of elementary steps and evaluated them using a sympy-based calculator. We numerically compare the output
  with the result in the data and remove all examples where they do not match (less than 3% loss in each split). Finally, we save the
- chain of steps the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience.
+ chain of steps in the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience. We also perform
+ in-dataset and cross-dataset data-leak detection within [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
+ Specifically for Ape210k, we removed parts of the validation and test split, with around 1700 remaining in each.

- You can read more information about this process in our [technical report](https://arxiv.org/abs/2305.15017).
+ You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).


  ## Content and Data splits

- In the `original-splits` config, the data splits correspond to the original Ape210K dataset. See [ape210k dataset github](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info.
+ The default config contains filtered splits with data leaks removed.
  You can load it using:

  ```python3
- datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
+ datasets.load_dataset("MU-NLPC/calc-ape210k")
  ```

- The default config contains filtered splits that remove in-dataset and cross-dataset data leaks within [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
- In the case of Ape210k, we removed parts of the validation and test split, with around 1700 remaining in each. Refer to our paper for more details.
+ In the `original-splits` config, the data splits are unfiltered and correspond to the original Ape210K dataset. See [ape210k dataset github](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info.
+ You can load it using:
+
+ ```python3
+ datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
+ ```


  Columns:
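
For reference, the sympy-based check described in the updated paragraph above can be sketched in a few lines. This is an illustration only, assuming each elementary step is a plain infix expression; it is not the authors' actual calculator, and the tolerance and helper names are hypothetical.

```python3
# Minimal sketch of the described check: evaluate a linearized chain of elementary
# steps with sympy and numerically compare the final value with the reference result.
# The step format, tolerance, and helper names are assumptions, not the real Calc-X tooling.
from sympy import sympify

def evaluate_chain(steps: list[str]) -> float:
    """Evaluate each elementary expression; return the value of the last step."""
    value = float("nan")
    for expr in steps:
        value = float(sympify(expr).evalf())
    return value

def matches_reference(steps: list[str], reference: float, tol: float = 1e-6) -> bool:
    """Numeric comparison used to decide whether an example is kept or removed."""
    return abs(evaluate_chain(steps) - reference) <= tol

print(matches_reference(["3 + 2", "100 / 5"], 20.0))  # True  -> example is kept
print(matches_reference(["3 + 2", "100 / 5"], 25.0))  # False -> example is removed
```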
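
The split filtering that distinguishes the default config from `original-splits` can likewise be approximated with a simple exact-match filter on normalized question text. This is a rough sketch only: the actual de-duplication procedure for the Calc-X collection is described in the paper, and the `question` column name and split names below are assumptions.

```python3
# Rough sketch of in-dataset leak removal: drop validation examples whose questions
# also appear (after normalization) in the training split. Column and split names
# are assumed; this is not the exact Calc-X de-duplication procedure.
import datasets

def normalize(text: str) -> str:
    return " ".join(text.lower().split())

ds = datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
train_questions = {normalize(q) for q in ds["train"]["question"]}

filtered_validation = ds["validation"].filter(
    lambda example: normalize(example["question"]) not in train_questions
)
print(len(ds["validation"]), "->", len(filtered_validation))
```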