This document lists prover models that are available on Ollama.
This is the prover referenced in the main README. To use it, pull it with the following command:
```shell
ollama pull wellecks/ntpctx-llama3-8b
```

and set two configuration variables in `~/.config/llmlean/config.toml`:

```toml
api = "ollama"
model = "wellecks/ntpctx-llama3-8b"
```

There are a few models available from Kimina, the collaboration between the Project Numina and Kimi teams.
Important configuration notes for Kimina models:

- These models were trained to use Markdown format for inputs and outputs, so you must set `prompt = "markdown"` and `responseFormat = "markdown"`.
To download the 1.7B model:

```shell
ollama pull BoltonBailey/Kimina-Prover-Distill-1.7B
```

To use it, set the following configuration variables in `~/.config/llmlean/config.toml`:

```toml
api = "ollama"
model = "BoltonBailey/Kimina-Prover-Distill-1.7B"
prompt = "markdown"
responseFormat = "markdown"
```

To download the 8B model:

```shell
ollama pull BoltonBailey/Kimina-Prover-Distill-8B
```

To use it, set the following configuration variables in `~/.config/llmlean/config.toml`:

```toml
api = "ollama"
model = "BoltonBailey/Kimina-Prover-Distill-8B"
prompt = "markdown"
responseFormat = "markdown"
```

- Set `numSamples` to a small number: these models generate detailed reasoning chains, so using fewer samples is recommended for better performance.
The BFS-Prover series (BFS-Prover-V2 and BFS-Prover-V1) is a family of open-source step-level provers developed by the ByteDance Seed team.
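As a purely illustrative sketch of the step-level setting (the candidate generator below is a random stand-in for the model, not ByteDance's code): several tactic candidates are sampled for each tactic state, and duplicates are discarded before any are tried, which is why the `numSamples` setting below matters:

```python
import random

# Toy tactic pool standing in for the prover model's output distribution.
# (Illustrative only; a real step-level prover conditions on the state.)
TOY_TACTICS = ["rfl", "simp", "omega", "ring", "decide"]

def sample_tactic(state: str) -> str:
    return random.choice(TOY_TACTICS)

def sample_candidates(state: str, num_samples: int) -> list[str]:
    """Draw num_samples tactic candidates for a state and deduplicate."""
    samples = [sample_tactic(state) for _ in range(num_samples)]
    return sorted(set(samples))

state = "⊢ 2 + 2 = 4"
candidates = sample_candidates(state, num_samples=5)
print(f"candidates for {state!r}: {candidates}")
```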
You can pull any version of the BFS-Prover-V2 model directly via Ollama:

```shell
ollama pull zeyu-zheng/BFS-Prover-V2-7B
```

Or pull the quantized q8_0 version:

```shell
ollama pull zeyu-zheng/BFS-Prover-V2-7B:q8_0
```

and set configuration variables in `~/.config/llmlean/config.toml`:

```toml
api = "ollama"
model = "zeyu-zheng/BFS-Prover-V2-7B"
mode = "parallel"
numSamples = "5"
prompt = "tacticstate"
responseFormat = "tactic"
```

For the 32B model:

```shell
ollama pull zeyu-zheng/BFS-Prover-V2-32B
```

Or pull the quantized f16 / q8_0 version:
```shell
ollama pull zeyu-zheng/BFS-Prover-V2-32B:f16
ollama pull zeyu-zheng/BFS-Prover-V2-32B:q8_0
```

and set configuration variables in `~/.config/llmlean/config.toml`:

```toml
api = "ollama"
model = "zeyu-zheng/BFS-Prover-V2-32B" # or "zeyu-zheng/BFS-Prover-V2-32B:f16" or "zeyu-zheng/BFS-Prover-V2-32B:q8_0"
mode = "parallel"
numSamples = "5"
prompt = "tacticstate"
responseFormat = "tactic"
```

- These models were trained to use the current tactic state as input and the preceding tactic as output, so you must set `prompt = "tacticstate"`, `responseFormat = "tactic"`, and `mode = "parallel"`.
- You can increase `numSamples` (e.g. 5, 10, …) to sample multiple tactics for each state.
- Currently `llmqed` functions the same as `llmstep ""`.
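For completeness, a hypothetical usage sketch in a Lean file (assuming the llmlean package is imported as `LLMlean`; per the note above, `llmqed` currently behaves like `llmstep ""`):

```lean
import LLMlean

-- Ask the configured prover for next-tactic suggestions at this state.
example (a b : Nat) : a + b = b + a := by
  llmstep ""
```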