Deterministic Reasoning Portability for the Era of Heterogeneous Intelligence.
CLO (Cross-LLM Orchestrator) is a specialized browser extension designed to eliminate context loss when switching between different LLM platforms. It acts as a "Reasoning State Virtual Machine", capturing the cognitive state of a conversation and enabling it to be "rehydrated" into any other model seamlessly.
- 🔄 Reasoning Portability: Transfer complex tasks between ChatGPT, Claude, Gemini, and Grok without repeating yourself.
- 🧠 Context Extraction: Automatically parses constraints, decisions, and artifacts from your conversation using a canonical JSON schema.
- ⚡ Zero-Friction Rehydration: Single-click "Inject" automatically compiles and adapts your state into the next model's input field.
- 📉 Semantic Compression: Intelligently compresses conversation history to save tokens while preserving critical reasoning paths.
- 🔗 Provenance Tracking: A "git log" for your thoughts — know exactly which model made which decision.
- 💎 Premium HUD: Minimalist, glassmorphism-inspired UI that stays out of your way until you need it.
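To make the "canonical JSON schema" concrete, a captured reasoning state might look like the sketch below. Every field name here is illustrative, not the extension's actual contract:

```javascript
// Hypothetical sketch of a CLO ReasoningState snapshot.
// Field names are illustrative only; the real schema lives in the extension source.
const reasoningState = {
  version: "0.1",
  sourceModel: "gpt-4o",     // model that produced the captured turns
  turns: 12,                 // conversation length at capture time
  constraints: [             // hard requirements extracted from the chat
    "Output must be valid JSON",
    "Target runtime is Node 18",
  ],
  decisions: [               // resolved choices, with provenance per model
    { by: "gpt-4o", text: "Use IndexedDB for persistence" },
  ],
  artifacts: [               // code or data produced so far
    { kind: "code", label: "serializer.js" },
  ],
};

console.log(JSON.stringify(reasoningState, null, 2));
```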
Currently in Developer Preview. To install manually:
- Clone the repo:

  ```bash
  git clone https://github.com/your-repo/clo-extension.git
  cd clo-extension
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build the project:

  ```bash
  npm run build
  ```

- Open your browser and navigate to `chrome://extensions/`.
- Enable Developer mode (toggle in the top-right corner).
- Click Load unpacked.
- Select the `dist` folder (created inside your project directory after running the build).
- Capture: Open any supported LLM (ChatGPT, Claude, or Gemini). Start your conversation as usual.
- Monitor: Notice the CLO HUD in the bottom-right corner. It tracks turns, detected models, and reasoning primitives in real-time.
- Export/Switch: When you reach a point where you want to switch models (e.g., from GPT-4o to Claude 3.5 Sonnet):
  - Open the new platform.
  - Click ⚡ Inject in the HUD.
  - CLO will automatically compile the reasoning state and paste it into the prompt box.
- Continue: Hit Enter, and the new model will pick up exactly where the last one left off, aware of all previous constraints and decisions.
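Under the hood, the Inject step boils down to compiling the stored state into a handoff prompt for the next model. A minimal sketch of that compilation, assuming a simplified state shape (the real Injection Engine is model-aware and adapts the frame per target platform):

```javascript
// Minimal sketch of compiling a reasoning state into a handoff prompt.
// compileHandoffPrompt is a hypothetical helper, not CLO's actual API.
function compileHandoffPrompt(state) {
  const lines = [
    `You are continuing a task started on ${state.sourceModel}.`,
    "Honor these constraints:",
    ...state.constraints.map((c) => `- ${c}`),
    "Decisions already made (do not revisit):",
    ...state.decisions.map((d) => `- [${d.by}] ${d.text}`),
    "Pick up exactly where the previous model left off.",
  ];
  return lines.join("\n");
}

const prompt = compileHandoffPrompt({
  sourceModel: "gpt-4o",
  constraints: ["Output must be valid JSON"],
  decisions: [{ by: "gpt-4o", text: "Use IndexedDB for persistence" }],
});
console.log(prompt);
```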
```mermaid
graph TD
    A[LLM Platform DOM] -->|Capture| B(Platform Interceptors)
    B -->|Structured Data| C{Background Orchestrator}
    C -->|Persistence| D[(IndexedDB State Store)]
    C -->|Stats/UI| E[Premium HUD / Popup]
    C -->|Logic| F(Serializer + Compressor)
    F -->|Compiled Prompt| G[Injection Engine]
    G -->|DOM Write| H[Target LLM Input]
```
| Component | Responsibility |
|---|---|
| Interceptors | Platform-specific DOM observers (MutationObserver) |
| Serializer | Mapping raw text to the ReasoningState schema |
| Compressor | Semantic folding of long histories using model-aware logic |
| Injector | Adapting state into model-specific prompt frames |
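To make the Serializer's responsibility concrete, a toy version might bucket raw assistant text into schema fields with simple pattern matching. This is a sketch only; the actual Serializer uses model-aware parsing, and the patterns below are invented for illustration:

```javascript
// Toy serializer: maps raw conversation text into ReasoningState buckets.
// serializeTurn and its regexes are illustrative, not CLO's real logic.
function serializeTurn(rawText, model) {
  const state = { constraints: [], decisions: [] };
  for (const line of rawText.split("\n")) {
    const t = line.trim();
    if (/^(must|never|always)\b/i.test(t)) {
      // Imperative phrasing is treated as a hard constraint.
      state.constraints.push(t);
    } else if (/^(decided|we will|chosen)\b/i.test(t)) {
      // Resolution phrasing becomes a decision, tagged with provenance.
      state.decisions.push({ by: model, text: t });
    }
  }
  return state;
}

const out = serializeTurn(
  "Must keep responses under 500 tokens\nDecided: use MutationObserver for capture",
  "claude-3-5-sonnet"
);
console.log(out);
```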
| Platform | Status | Model Detection | Capture | Injection |
|---|---|---|---|---|
| ChatGPT | ✅ Active | ✅ | ✅ | ✅ |
| Claude | ✅ Active | ✅ | ✅ | ✅ |
| Gemini | ✅ Active | ✅ | ✅ | ✅ |
| Grok | 🟡 Beta | Minimal | 🚧 | ✅ |
We're building the future of deterministic reasoning. Contributions are welcome!
- Fork the repository.
- Create your feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes (`git commit -m 'Add amazing feature'`).
- Push to the branch (`git push origin feature/amazing-feature`).
- Open a Pull Request.
Distributed under the MIT License. See LICENSE for more information.
Built with ❤️ for the LLM community.
By Anurag