Goedel-Prover-V2

The Strongest Open-Source Theorem Prover to Date

*Core Contributor
1Princeton Language and Intelligence, Princeton University, 2Tsinghua University, 3NVIDIA
4Stanford University, 5Meta FAIR, 6Amazon, 7Shanghai Jiao Tong University, 8Peking University



Key Achievements

New SOTA on MiniF2F

+8.0% Pass@32

Our flagship 32B model significantly outperforms DeepSeek-Prover-V2-671B.

Small But Mighty

8B ≳ 671B

Our small 8B model matches prior SOTA 671B model at Pass@32.

#1 on PutnamBench

#1

Securing the top rank on the leaderboard.

Introduction

We introduce Goedel-Prover-V2, an open-source language model series that sets a new state-of-the-art in automated formal proof generation. Built on the standard expert iteration and reinforcement learning pipeline, our approach incorporates three key innovations: (1) Scaffolded data synthesis: We generate synthetic proof tasks of increasing difficulty to progressively train the model, enabling it to master increasingly complex theorems; (2) Verifier-guided self-correction: The model learns to iteratively revise its own proofs by leveraging feedback from Lean’s compiler, closely mimicking how humans refine their work; (3) Model averaging: We combine multiple model checkpoints to improve robustness and overall performance.

Our small model, Goedel-Prover-V2-8B, reaches 83.3% on the MiniF2F test set at Pass@32, matching the performance of the prior state-of-the-art DeepSeek-Prover-V2-671B while being nearly 100 times smaller in model size. Our flagship model, Goedel-Prover-V2-32B, achieves 88.1% on MiniF2F at Pass@32 in standard mode and 90.4% in self-correction mode, outperforming the prior SOTA DeepSeek-Prover-V2-671B and the concurrent Kimina-Prover-72B by a large margin. Additionally, our flagship model with self-correction solves 64 problems on PutnamBench at Pass@64, securing first place on the leaderboard and surpassing DeepSeek-Prover-V2-671B's record of 47 problems solved at Pass@1024.

Benchmark Performance

Self-correction mode: Our model improves proof quality by first generating an initial candidate and then using Lean compiler feedback to iteratively revise it. We perform two rounds of self-correction, which remains computationally efficient: the total output length (initial proof plus two revisions) grows only modestly, from the standard 32K to 40K tokens.
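The revision loop described above can be sketched as follows. This is a minimal illustration, not the released inference code; `generate`, `revise`, and `check` are hypothetical callables standing in for model sampling and Lean compilation.

```python
def prove_with_self_correction(theorem, generate, revise, check, max_rounds=2):
    """Generate an initial proof, then revise up to `max_rounds` times
    using verifier feedback. All callables are hypothetical stubs:
    - generate(theorem) -> proof string
    - check(proof) -> (is_valid, compiler_feedback)
    - revise(theorem, proof, feedback) -> revised proof string
    """
    proof = generate(theorem)
    for _ in range(max_rounds):
        ok, feedback = check(proof)
        if ok:
            return proof, True
        # Feed the compiler error back to the model, mimicking how a
        # human would read the error and patch the proof.
        proof = revise(theorem, proof, feedback)
    ok, _ = check(proof)
    return proof, ok
```

With two rounds, at most three proof attempts are compiled per problem, which is why the token budget grows only from 32K to 40K rather than tripling.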

MiniF2F Benchmark Performance
Figure 1: Pass@32 performance on MiniF2F, PutnamBench, and our new MathOlympiadBench containing 360 IMO-level problems.

The charts above demonstrate the state-of-the-art performance of Goedel-Prover-V2. We report all numbers at Pass@32: (1) across all three datasets, our flagship 32B model, in both standard and self-correction mode, significantly outperforms the prior state-of-the-art DeepSeek-Prover-V2-671B and Kimina-Prover-72B; (2) on MiniF2F, our 8B model matches the performance of DeepSeek-Prover-V2-671B while being nearly 100 times smaller in model size.

| # | Model | Num-solved | Pass |
|---|-------|------------|------|
| 1 | Goedel-Prover-V2-32B (self-correction mode) | 64 | Pass@64 |
| 1 | Goedel-Prover-V2-32B (self-correction mode) | 57 | Pass@32 |
| 1 | Goedel-Prover-V2-32B | 44 | Pass@32 |
| 2 | DeepSeek-Prover-V2-671B | 47 | Pass@1024 |
| 2 | DeepSeek-Prover-V2-671B | 22 | Pass@32 |
| 3 | DSP+ | 23 | Pass@128 |
| 4 | Kimina-Prover-7B-Distill | 10 | Pass@192 |
| 5 | Self-play Theorem Prover | 8 | Pass@3200 |
| 6 | Goedel-Prover-V1 | 7 | Pass@512 |
Table 1: PutnamBench leaderboard. Goedel-Prover-V2-32B secures the top rank with significantly less compute (pass number) than the previous state-of-the-art.

Inference-time Scaling

Inference Scale Performance
Figure 2: Performance on MiniF2F test set under different sample budgets.

The scaling curves above show that our 32B model consistently outperforms all prior state-of-the-art models across the entire range of inference-time compute budgets.
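For reference, Pass@K for a fixed sample budget is commonly computed with the standard unbiased estimator of Chen et al. (2021); this is a general formula, not something specific to this work. Given n total samples per problem with c correct, it estimates the probability that at least one of k randomly drawn samples succeeds:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples drawn per problem
    c: number of correct samples among them
    k: evaluation budget
    """
    if n - c < k:
        # Fewer than k incorrect samples: every k-subset contains a success.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Averaging this quantity over all benchmark problems gives the Pass@K numbers plotted in the scaling curves.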

Our Methodology

Expert Iteration & RL

We use Qwen3-8B and Qwen3-32B as base models and follow the standard pipeline: formalizing problems, generating and verifying proofs, and using newly discovered correct proofs to train the next generation of models, with reinforcement learning providing additional improvements.
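The expert-iteration loop can be sketched abstractly as below. This is an illustrative skeleton only; `generate`, `verify`, and `train` are hypothetical stand-ins for model sampling, Lean verification, and fine-tuning.

```python
def expert_iteration(problems, model, generate, verify, train, rounds=3):
    """One expert-iteration pipeline (illustrative stubs):
    - generate(model, problem) -> candidate proof
    - verify(problem, proof)   -> bool (e.g. Lean compilation succeeds)
    - train(model, data)       -> fine-tuned model
    Each round, verified proofs become training data for the next model.
    """
    for _ in range(rounds):
        new_data = []
        for problem in problems:
            proof = generate(model, problem)
            if verify(problem, proof):
                new_data.append((problem, proof))
        # Fine-tune on the newly discovered correct proofs.
        model = train(model, new_data)
    return model
```

In practice each round also expands the problem pool via formalization, and a reinforcement-learning stage follows the supervised updates.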

Scaffolded Data Synthesis

We automatically generate intermediate-level problems that bridge the gap between simple solved problems and complex unsolved problems. This results in a smoother difficulty progression and provides a denser, more informative training signal.
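One way to picture the selection of intermediate-difficulty problems is filtering by estimated solve rate, keeping problems that are neither trivially solved nor hopelessly hard. The band thresholds below are illustrative assumptions, not values from our pipeline:

```python
def select_scaffold_problems(problems, solve_rate, lo=0.1, hi=0.6):
    """Keep problems whose estimated solve rate falls in an intermediate
    band (thresholds `lo`/`hi` are illustrative, not from the paper).
    `solve_rate(p)` would typically be the empirical fraction of sampled
    proofs for problem p that the verifier accepts."""
    return [p for p in problems if lo <= solve_rate(p) <= hi]
```

Problems in this band supply a denser training signal than the extremes: the model sometimes succeeds, so verified proofs exist, but not so often that the examples are uninformative.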

Verifier-Guided Self-Correction

We train the model to effectively use the Lean compilation feedback to iteratively correct its own proofs, integrating this task into both supervised fine-tuning and reinforcement learning.

Model Averaging

To prevent the loss of diversity in later training stages, we average trained checkpoints with the base model. This simple technique restores diversity and boosts Pass@K performance for larger K.
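Checkpoint averaging itself is a simple parameter-wise mean. The sketch below uses plain floats for clarity; with PyTorch one would average the tensors in each checkpoint's `state_dict` the same way:

```python
def average_checkpoints(state_dicts, weights=None):
    """Parameter-wise (optionally weighted) average of model checkpoints.

    state_dicts: list of dicts mapping parameter name -> value
                 (floats here for illustration; tensors in practice).
    weights: per-checkpoint mixing weights; defaults to uniform.
    """
    n = len(state_dicts)
    if weights is None:
        weights = [1.0 / n] * n
    keys = state_dicts[0].keys()
    return {k: sum(w * sd[k] for w, sd in zip(weights, state_dicts))
            for k in keys}
```

Averaging a late-stage checkpoint with the base model pulls the parameters back toward a higher-diversity region, which is what restores Pass@K at large K.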

Model & Dataset Downloads

We release our Goedel-Prover-V2 models and the new MathOlympiadBench benchmark to foster future research.

Models

| Model | Download |
|-------|----------|
| Goedel-Prover-V2-32B | 🤗 HuggingFace |
| Goedel-Prover-V2-8B | 🤗 HuggingFace |

Dataset

| Dataset | Download |
|---------|----------|
| MathOlympiadBench | 🤗 HuggingFace |

MathOlympiadBench comprises human-verified formalizations of Olympiad-level mathematical competition problems, sourced from the Compfiles and IMOSLLean4 repositories. It contains 360 problems: 158 IMO problems from 1959 to 2024, 131 IMO shortlist problems covering 2006 to 2023, 68 regional mathematical Olympiad problems, and 3 additional mathematical puzzles.

These models are being released to aid other open-source projects, including those geared toward the upcoming IMO competition. A full paper with complete details will be released in the coming weeks.

Citation

@misc{lin2025goedelproverv2,
    title={Goedel-Prover-V2: The Strongest Open-Source Theorem Prover to Date},
    author={Yong Lin and Shange Tang and Bohan Lyu and Ziran Yang and Jui-Hui Chung and Haoyu Zhao and Lai Jiang and Yihan Geng and Jiawei Ge and Jingruo Sun and Jiayun Wu and Jiri Gesi and David Acuna and Kaiyu Yang and Hongzhou Lin and Yejin Choi and Danqi Chen and Sanjeev Arora and Chi Jin},
    year={2025}
}