Hi team,
I am trying to deploy Wazuh-OpenClaw-Autopilot in a fully air-gapped environment and need clarification on the following:
Memory / Embeddings (Offline Requirement)
The current configuration uses:

```json
"memory": {
  "enabled": true,
  "search": {
    "provider": "openai",
    "model": "text-embedding-3-small",
    "hybrid": true
  }
}
```
Since we cannot use OpenAI in air-gapped mode:
- Is there an officially supported open-source embedding solution?
- Can Ollama (e.g., nomic-embed-text) be used for embeddings?
- Is there support for local vector databases (Qdrant / Chroma / others)?
- What is the recommended memory configuration for fully offline deployment?
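For context on what we are hoping for: if Ollama embeddings were supported, we would expect the memory provider to call Ollama's local `/api/embeddings` REST endpoint. A minimal standalone sketch of that call (assuming the default Ollama port 11434 and that `nomic-embed-text` has already been pulled; this is our own test code, not Autopilot's implementation):

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if your daemon listens elsewhere.
OLLAMA_EMBED_URL = "http://127.0.0.1:11434/api/embeddings"


def build_embed_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP request for Ollama's /api/embeddings endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_EMBED_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )


def embed(model: str, prompt: str) -> list[float]:
    """Return the embedding vector; requires a running Ollama daemon."""
    req = build_embed_request(model, prompt)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["embedding"]
```

With the daemon running, `embed("nomic-embed-text", "hello")` returns the embedding vector; what we cannot tell from the docs is whether Autopilot's memory layer can be pointed at this endpoint.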
Fully Air-Gapped Ollama Deployment:
We want to:
- Use Ollama as the only LLM provider
- Disable all cloud providers
- Run completely offline
Please confirm:
- Is Ollama officially supported as primary + fallback model?
- Is embedding via Ollama supported?
- Can you provide a fully offline openclaw.json example?
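To make the ask concrete, this is the kind of fully offline configuration we are hoping for. Note that the `"ollama"` provider name, the `"baseUrl"` key, and the model names below are our assumptions, not documented options; please correct whatever does not match the actual schema:

```json
{
  "model": {
    "provider": "ollama",
    "name": "llama3.1",
    "baseUrl": "http://127.0.0.1:11434"
  },
  "memory": {
    "enabled": true,
    "search": {
      "provider": "ollama",
      "model": "nomic-embed-text",
      "hybrid": true
    }
  }
}
```

Again, this is a guess at the shape of the config, not something we found in the docs.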
Wazuh Version Compatibility
We are currently using Wazuh App version: 4.14.1
Issue:
Most MCP endpoints are not responding.
Questions:
- Is 4.14.1 officially supported?
- What Wazuh version is recommended for stable MCP integration?
- Are there known compatibility issues with 4.14.x?
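For troubleshooting, a quick TCP connectivity check like the following can at least rule out network-level problems before blaming version compatibility (the host and port below are placeholders for the MCP server's actual address):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False


if __name__ == "__main__":
    # Placeholder values; substitute your MCP server's address and port.
    print(port_open("127.0.0.1", 8080))
```

In our case the ports are reachable but the MCP endpoints themselves do not respond, which is why we suspect a version mismatch rather than networking.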
Could you please consider adding an option to install.sh to install only the MCP server, or to install MCP together with Autopilot but without Tailscale? Currently, even after setting AUTOPILOT_MODE=bootstrap and AUTOPILOT_REQUIRE_TAILSCALE=false in the .env file under the install directory, running ./install.sh still installs Tailscale. I would appreciate clarification on:
- how to completely disable the Tailscale installation
- whether any modification to install.sh is required
- at what stage openclaw.json should be updated: after the default installation, or by modifying install.sh before execution
I have already seen https://github.com/gensecaihq/Wazuh-Openclaw-Autopilot/blob/main/docs/SCENARIOS.md, but running ./install/install.sh --menu still results in the default installation.
Additionally, I would be grateful for detailed guidance on performing the installation with Ollama in a fully offline environment, including which Ollama models are recommended for Autopilot use cases and how to run the system locally without Tailscale for testing purposes.

Finally, please let me know whether there is an official discussion platform, such as Slack, Discord, or a forum, for Autopilot deployment and development support.

Thank you for your support; your guidance on offline deployment, Ollama configuration, MCP compatibility, and installer customization would be greatly appreciated.
Thanks
nikopuf