Chinese artificial intelligence company DeepSeek has released a new open-weight large language model (LLM) called Prover V2, published on April 30, 2025, via Hugging Face. At 671 billion parameters, Prover V2 is significantly larger than its predecessors, Prover V1 and Prover V1.5, and is aimed at verifying mathematical proofs. The model consolidates mathematical knowledge, enabling it to both generate and verify proofs, capabilities that could benefit research and education.

Prover V2's weights have been quantized to 8-bit floating-point precision, cutting storage to approximately 650 gigabytes, roughly half of what 16-bit weights would require, and lowering the hardware bar for running the model.

The earlier Prover V1 was built on the seven-billion-parameter DeepSeekMath model and trained on synthetic data. Prover V1.5 then optimized both training and inference, achieving better accuracy than V1, but DeepSeek has not yet detailed what specific improvements Prover V2 brings over it.

The open release of these weights has rekindled the debate over democratizing access to AI versus the potential for misuse. Together with advances such as model distillation and quantization, open-weight releases continue to put cutting-edge AI within reach of users who lack supercomputer-class hardware.
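
The storage arithmetic behind the quantization figure is straightforward: at one byte per weight, 671 billion parameters come to roughly 670 GB before any overhead, in line with the cited figure, versus about 1.3 TB at 16-bit precision. The sketch below is a minimal illustration of that halving on a single weight matrix; it uses simple per-tensor integer quantization as a stand-in, since the storage math is the same as for an 8-bit floating-point format even though the encodings differ, and the function names and values are illustrative rather than DeepSeek's actual recipe.

```python
import numpy as np

# Illustrative per-tensor 8-bit quantization; not DeepSeek's actual FP8 scheme.

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto signed 8-bit integers with one shared scale."""
    w = weights.astype(np.float32)
    scale = float(np.abs(w).max()) / 127.0  # largest magnitude maps to +/-127
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4096, 4096)).astype(np.float16)

q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)

print(f"16-bit size: {w.nbytes / 2**20:.1f} MiB")  # 32.0 MiB
print(f" 8-bit size: {q.nbytes / 2**20:.1f} MiB")  # 16.0 MiB
print(f"max abs error: {np.abs(w.astype(np.float32) - w_hat).max():.5f}")
```

The same one-byte-per-weight trade applies at full model scale, which is why an 8-bit copy of a 671-billion-parameter model lands in the hundreds of gigabytes rather than over a terabyte; the cost is a small, bounded rounding error per weight.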
