OpenReason can run the exact same classifier → skeleton → solver → verifier → finalizer flow through a LangGraph `StateGraph`. This mode is optional and opt-in via `graph.enabled`.
```ts
openreason.init({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o",
  graph: {
    enabled: true,
    checkpoint: true, // uses MemorySaver from @langchain/langgraph-checkpoint
    threadPrefix: "demo", // helps group runs when checkpointing is on
  },
});
```
What changes:
- Nodes mirror the standard pipeline (classify, cache, quick reflex, structure, solve, evaluate) but execute as a compiled LangGraph.
- When `checkpoint` is true, the built-in `MemorySaver` tracks progress per `threadPrefix`, letting you resume or inspect state.
- If anything fails or graph execution is disabled, OpenReason falls back to the linear pipeline automatically.
- All existing telemetry (memory cache, prompt evolution, verification metadata) remains intact, so no code changes are required when toggling the mode.
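The automatic fallback can be sketched in isolation. Everything below (`runGraph`, `runLinear`, the `Output` shape) is hypothetical scaffolding to show the control flow, not OpenReason's actual internals:

```ts
// Illustrative only: names and shapes are assumptions, not OpenReason's API.
type Output = { answer: string; verified: boolean };

// Stand-in for the linear classify → skeleton → solve → verify → finalize flow.
async function runLinear(question: string): Promise<Output> {
  return { answer: `linear:${question}`, verified: true };
}

// Stand-in for executing the compiled LangGraph; fails here to show fallback.
async function runGraph(question: string): Promise<Output> {
  throw new Error("graph execution unavailable");
}

async function reason(question: string, graphEnabled: boolean): Promise<Output> {
  if (graphEnabled) {
    try {
      return await runGraph(question);
    } catch {
      // Any graph failure falls through to the linear pipeline.
    }
  }
  return runLinear(question);
}
```

The caller never sees the graph failure: with `graph.enabled` on or off, the same `Output` comes back, which is why toggling the mode requires no code changes.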
Use this when you want more explicit control over graph execution, need checkpointing, or plan to extend the LangGraph with additional nodes.
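To illustrate how a `threadPrefix` groups checkpointed runs, here is a minimal in-memory store in the spirit of `MemorySaver`. The class and helper are illustrative sketches, not the `@langchain/langgraph-checkpoint` API:

```ts
// Illustrative in-memory checkpointer: each thread id maps to a list of
// saved states, so a run can be resumed from or inspected at any step.
class InMemoryCheckpointer {
  private store = new Map<string, unknown[]>();

  save(threadId: string, state: unknown): void {
    const history = this.store.get(threadId) ?? [];
    history.push(state);
    this.store.set(threadId, history);
  }

  history(threadId: string): unknown[] {
    return this.store.get(threadId) ?? [];
  }
}

// A threadPrefix of "demo" yields thread ids like "demo:<runId>", so all
// runs from one configuration cluster under a common, queryable prefix.
function threadId(threadPrefix: string, runId: string): string {
  return `${threadPrefix}:${runId}`;
}
```

Two saves under `threadId("demo", "run-1")` accumulate a two-entry history for that thread, while other threads stay empty, which is the grouping behavior the `threadPrefix` option is meant to provide.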