| Cerebras Planning and Optimization | `cepo` | Combines Best of N, Chain-of-Thought, Self-Reflection, Self-Improvement, and various prompting techniques |
| CoT with Reflection | `cot_reflection` | Implements chain-of-thought reasoning with \<thinking\>, \<reflection\> and \<output\> sections |
| PlanSearch | `plansearch` | Implements a search algorithm over candidate plans for solving a problem in natural language |
| ReRead | `re2` | Implements rereading to improve reasoning by processing queries twice |
| Self-Consistency | `self_consistency` | Implements an advanced self-consistency method |
| Z3 Solver | `z3` | Utilizes the Z3 theorem prover for logical reasoning |
| R* Algorithm | `rstar` | Implements the R* algorithm for problem-solving |
| LEAP | `leap` | Learns task-specific principles from few-shot examples |
| Round Trip Optimization | `rto` | Optimizes responses through a round-trip process |
| Best of N Sampling | `bon` | Generates multiple responses and selects the best one |
| Mixture of Agents | `moa` | Combines responses from multiple critiques |
| Monte Carlo Tree Search | `mcts` | Uses MCTS for decision-making in chat responses |
| PV Game | `pvg` | Applies a prover-verifier game approach at inference time |
| CoT Decoding | N/A for proxy | Implements chain-of-thought decoding to elicit reasoning without explicit prompting |
| Entropy Decoding | N/A for proxy | Implements adaptive sampling based on the uncertainty of tokens during generation |
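
Each technique above is selected by its slug (the decoding approaches marked N/A cannot be used through the proxy). As a minimal usage sketch, assuming a locally running proxy on its default port and the slug-prefix convention for choosing a technique:

```python
# Minimal sketch: selecting a technique by prefixing its slug to the model
# name when calling the proxy. Assumes the proxy listens on localhost:8000
# and forwards to an OpenAI-compatible backend; adjust the key and model
# for your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-...")

response = client.chat.completions.create(
    model="cepo-gpt-4o-mini",  # the "cepo-" prefix routes the request through CePO
    messages=[{"role": "user", "content": "If 3x + 5 = 20, what is x?"}],
)
print(response.choices[0].message.content)
```
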
# The Cerebras Planning and Optimization (CePO) Method
CePO is an inference-time computation method designed to enhance the accuracy of large language models (LLMs) on tasks requiring reasoning and planning, such as solving math or coding problems. It integrates several advanced techniques, including Best of N, Chain of Thought (CoT), Self-Reflection, Self-Improvement, and Prompt Engineering.
If you have any questions or want to contribute, please reach out to us on [cerebras.ai/discord](https://cerebras.ai/discord).
## CePO Methodology
In CePO, the Best of N technique is applied to `bestofn_n` solution candidates. Each solution is generated through the following four steps:
**Step 1**: Plan Generation
The model generates a detailed, step-by-step plan to solve the problem, along with its confidence level for each step.
**Step 2**: Initial Solution
Using the plan from Step 1, the model produces an initial solution.
Steps 1 and 2 are repeated `planning_n` times to generate multiple solution proposals.
If the model exceeds the token budget during Step 1 or 2, the plan/solution is marked as incomplete, rejected, and regenerated. A maximum of `planning_m` attempts is made to generate `planning_n` valid proposals.
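
A minimal sketch of this proposal loop, assuming a generic `llm` callable that returns `None` when the token budget is exceeded; the prompts are illustrative placeholders, not the actual CePO prompts:

```python
from typing import Callable, Optional

def generate_proposals(
    problem: str,
    llm: Callable[[str], Optional[str]],  # returns None on token-budget overrun
    planning_n: int,
    planning_m: int,
) -> list[tuple[str, str]]:
    """Steps 1-2: collect up to planning_n valid (plan, solution) pairs,
    spending at most planning_m generation attempts in total."""
    proposals: list[tuple[str, str]] = []
    attempts = 0
    while len(proposals) < planning_n and attempts < planning_m:
        attempts += 1
        # Step 1: a step-by-step plan with a confidence level for each step.
        plan = llm(
            "Devise a step-by-step plan, stating your confidence in each "
            f"step, for solving:\n{problem}"
        )
        if plan is None:  # incomplete plan: reject and regenerate
            continue
        # Step 2: an initial solution produced by following the plan.
        solution = llm(
            f"Follow this plan to solve the problem.\nProblem: {problem}\nPlan: {plan}"
        )
        if solution is None:  # incomplete solution: reject and regenerate
            continue
        proposals.append((plan, solution))
    return proposals
```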
**Step 3**: Plan Refinement
The model reviews all generated solution proposals and their associated plans, identifying inconsistencies. Based on this analysis, a refined, final step-by-step plan is constructed.
**Step 4**: Final Solution
The model uses the refined plan from Step 3 to produce the final answer.
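
Combining the four steps, a minimal sketch of one CePO candidate and the outer Best of N selection, reusing `generate_proposals` from the sketch above; the prompts and the `rate` scoring function are illustrative assumptions, not the actual CePO implementation:

```python
from typing import Callable, Optional

LLM = Callable[[str], Optional[str]]

def cepo_candidate(problem: str, llm: LLM, planning_n: int, planning_m: int) -> Optional[str]:
    """Steps 3-4: refine the proposals into one final plan, then answer."""
    proposals = generate_proposals(problem, llm, planning_n, planning_m)
    listing = "\n\n".join(
        f"Proposal {i + 1}:\nPlan: {plan}\nSolution: {solution}"
        for i, (plan, solution) in enumerate(proposals)
    )
    # Step 3: review all proposals, flag inconsistencies, and produce a
    # refined, final step-by-step plan.
    refined_plan = llm(
        "Review the solution proposals below, point out any inconsistencies, "
        f"and write a refined final step-by-step plan.\nProblem: {problem}\n\n{listing}"
    )
    if refined_plan is None:
        return None
    # Step 4: produce the final answer by executing the refined plan.
    return llm(
        f"Follow this plan to produce the final answer.\nProblem: {problem}\nPlan: {refined_plan}"
    )

def cepo(problem: str, llm: LLM, bestofn_n: int, planning_n: int,
         planning_m: int, rate: Callable[[str], float]) -> str:
    """Outer Best of N: generate bestofn_n candidates and keep the best,
    scored here by a hypothetical rate() function."""
    candidates = [cepo_candidate(problem, llm, planning_n, planning_m)
                  for _ in range(bestofn_n)]
    return max((c for c in candidates if c is not None), key=rate)
```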
## CePO Current Status
This project is a work in progress, and the provided code is in an early experimental stage. While the proposed approach works well across the benchmarks we tested, further improvements can be achieved through task-specific customization of the prompts.
## CePO Ablation Studies
We conducted ablation studies to evaluate the impact of various hyperparameters in the CePO framework. Our results indicate that the chosen hyperparameter settings strike a good balance between computational cost and accuracy.
Interestingly, the self-critique and quality improvement capabilities of existing off-the-shelf models do not always scale proportionally with increased inference compute. Addressing this limitation remains a key focus, and we plan to explore custom model fine-tuning as a potential solution in the future.