Update README.md
README.md
CHANGED
@@ -44,9 +44,7 @@ license: mit
## 1. Introduction

-A significant challenge in training LLMs for formal reasoning is the scarcity of data. To overcome this, we synthesize a large and diverse dataset by auto-formalizing a substantial corpus of informal mathematical problems. Our approach transforms natural language statements into various formal styles in Lean 4, resulting in 1.78 million syntactically correct and content-accurate statements. We then iteratively train a prover, alternating between generating verified proofs and training the model using these proofs. Our model, Goedel-Prover, achieves state-of-the-art performance across multiple benchmarks for whole-proof generation, which generates the entire proof without interacting with Lean. On the miniF2F benchmark (Pass@32), it attains a 57.6% success rate, surpassing the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), securing the top position on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean-workbook problems, nearly doubling the 15.7K produced by earlier works.
<p align="center">
<img width="100%" src="performance.png">

@@ -121,8 +119,13 @@ We are also releasing 29.7K proofs of the problems in Lean-workbook found by our
## 4. Citation
```latex
-@
-title={Goedel-Prover: A
author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia Li and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin},
}
```
## 1. Introduction
+We introduce Goedel-Prover, an open-source large language model (LLM) that achieves state-of-the-art (SOTA) performance in automated formal proof generation for mathematical problems. The key challenge in this field is the scarcity of formalized math statements and proofs, which we tackle in the following ways. We train statement formalizers to translate natural language math problems from Numina into formal language (Lean 4), creating a dataset of 1.64 million formal statements. LLMs are used to check that the formal statements accurately preserve the content of the original natural language problems. We then iteratively build a large dataset of formal proofs by training a series of provers. Each prover succeeds in proving many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. The final prover outperforms all existing open-source models in whole-proof generation. On the miniF2F benchmark, it achieves a 57.6% success rate (Pass@32), exceeding the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by earlier works.
<p align="center">
<img width="100%" src="performance.png">
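
The pipeline sketched above turns an informal problem into a Lean 4 statement whose proof is left for the prover to fill in. As a rough illustration only, under the assumption that formalized statements take the form of Lean 4 theorems with open proof bodies (the problem and theorem name below are hypothetical and not drawn from the released dataset):

```lean
-- Hypothetical illustration of an auto-formalized statement (not from the
-- released dataset). Informal problem: "Show that the sum of two even
-- integers is even."
import Mathlib

-- The formalizer emits only the statement; the proof body is left as `sorry`
-- for the prover to complete and for the Lean 4 compiler to verify.
theorem sum_of_two_evens_is_even (a b : ℤ) (ha : Even a) (hb : Even b) :
    Even (a + b) := by
  sorry
```

Statements of this kind are checked by the Lean compiler for syntactic correctness and by LLM judges for faithfulness to the original natural language problem before entering the proof-generation loop.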
## 4. Citation
```latex
+@misc{lin2025goedelproverfrontiermodelopensource,
+title={Goedel-Prover: A Frontier Model for Open-Source Automated Theorem Proving},
author={Yong Lin and Shange Tang and Bohan Lyu and Jiayun Wu and Hongzhou Lin and Kaiyu Yang and Jia Li and Mengzhou Xia and Danqi Chen and Sanjeev Arora and Chi Jin},
+year={2025},
+eprint={2502.07640},
+archivePrefix={arXiv},
+primaryClass={cs.LG},
+url={https://arxiv.org/abs/2502.07640},
}
```