---
license: mit
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: question_chinese
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  - name: equation
    dtype: string
  splits:
  - name: train
    num_bytes: 111988047
    num_examples: 195179
  - name: validation
    num_bytes: 1172933
    num_examples: 1783
  - name: test
    num_bytes: 1157061
    num_examples: 1785
  download_size: 50827709
  dataset_size: 114318041
- config_name: original-splits
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: question_chinese
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  - name: equation
    dtype: string
  splits:
  - name: train
    num_bytes: 111988047
    num_examples: 195179
  - name: validation
    num_bytes: 2798479
    num_examples: 4867
  - name: test
    num_bytes: 2793355
    num_examples: 4867
  download_size: 52234086
  dataset_size: 117579881
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: original-splits
  data_files:
  - split: train
    path: original-splits/train-*
  - split: validation
    path: original-splits/validation-*
  - split: test
    path: original-splits/test-*
---

# Dataset Card for "Calc-ape210k"


## Summary

This dataset is an instance of the Ape210K dataset, converted to a simple HTML-like language that can be easily parsed (e.g., with BeautifulSoup). The data contains three types of tags:
- `gadget`: a tag whose content is intended to be evaluated by calling an external tool (a sympy-based calculator in this case)
- `output`: the output of the external tool
- `result`: the final answer to the mathematical problem (a number)
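As a rough sketch of what working with these tags looks like (the chain string and its attributes below are made up for illustration, not taken from the data), the three tag types can be pulled out with the standard library alone:

```python
import re

# A made-up chain in the dataset's HTML-like format; the exact
# attributes and spacing are assumptions, not copied from the data.
chain = (
    '<gadget id="calculator">2*(3+4)</gadget>'
    '<output>14</output>'
    '<result>14</result>'
)

def extract(tag: str, text: str) -> list[str]:
    """Return the contents of every <tag>...</tag> pair in text."""
    return re.findall(rf"<{tag}[^>]*>(.*?)</{tag}>", text, re.DOTALL)

gadget_calls = extract("gadget", chain)   # expressions sent to the calculator
tool_outputs = extract("output", chain)   # what the calculator returned
final_result = extract("result", chain)   # the answer to the problem
```

BeautifulSoup's `find_all("gadget")` etc. would achieve the same with proper HTML parsing.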


## Supported Tasks

The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
It presents in-context scenarios where a model can offload the computations in its reasoning chain to a calculator.


## Construction Process

First, we translated the questions from Chinese into English using Google Translate. Next, we parsed the equations and the results. We linearized
the equations into sequences of elementary steps and evaluated them using a sympy-based calculator. We numerically compared the output
with the result stored in the data and removed all examples where the two did not match (a loss of less than 3% in each split). Finally, we saved the
chain of steps in the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience. We also performed
in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
Specifically for Ape210K, we removed parts of the validation and test splits, leaving around 1,800 examples in each.
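The numeric filtering step above can be sketched as follows. This is a simplified stand-in, not the actual pipeline: the helper name, the tolerance value, and the sample pairs are all assumptions (the real comparison runs on sympy-evaluated chains):

```python
import math

def results_match(computed: float, stated: float, rel_tol: float = 1e-6) -> bool:
    """Numeric agreement check: keep an example only if the value of the
    evaluated chain agrees with the result stored in the original data."""
    return math.isclose(computed, stated, rel_tol=rel_tol)

# Hypothetical (evaluated, stored) pairs; the last one would be dropped.
pairs = [(20.0, 20.0), (3.3333333, 10 / 3), (5.0, 6.0)]
kept = [p for p in pairs if results_match(*p)]
```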

You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017).


## Attributes

- `id` - the id of the example
- `question` - the description of the math problem, automatically translated from the `question_chinese` column into English using Google Translate
- `question_chinese` - the description of the math problem in Chinese
- `chain` - the linearized `equation`: a sequence of arithmetic steps in the HTML-like language that can be evaluated using our sympy-based calculator
- `result` - the result as a string (can be an integer, a float, or a fraction)
- `result_float` - the result as a float
- `equation` - a nested expression that evaluates to the correct answer

Attributes `id`, `question`, `chain`, and `result` are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
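Since `result` is a string that may hold an integer, a decimal, or a fraction, a small helper (an illustration, not part of the dataset's tooling) can normalize it to a float for comparison with `result_float`:

```python
from fractions import Fraction

def result_to_float(result: str) -> float:
    """Convert a `result` string (integer, decimal, or a fraction
    such as "3/4") to a float comparable with `result_float`."""
    return float(Fraction(result))

result_to_float("12")   # 12.0
result_to_float("1.5")  # 1.5
result_to_float("3/4")  # 0.75
```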



## Data splits

The default config contains filtered splits with data leaks removed.
You can load it using:

```python3
datasets.load_dataset("MU-NLPC/calc-ape210k")
```

In the `original-splits` config, the data splits are unfiltered and correspond to the original Ape210K dataset. See the [Ape210K GitHub repository](https://github.com/Chenny0808/ape210k) and [the paper](https://arxiv.org/abs/2009.11506) for more info.
You can load it using:

```python3
datasets.load_dataset("MU-NLPC/calc-ape210k", "original-splits")
```


## Licence

MIT, consistent with the original dataset.


## Cite

If you use this version of the dataset in research, please cite the [original Ape210K paper](https://arxiv.org/abs/2009.11506) and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:

```bibtex
@inproceedings{kadlcik-etal-2023-soft,
    title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
    author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.15017",
}
```