A powerful Slack chatbot for Kubernetes cluster management, powered by AI. It allows you to interact with your clusters using natural language, execute commands, and explore resources through a conversational interface.
In many organizations, DevOps teams often become a bottleneck for developers who need to understand why their services aren't deploying correctly or why pods are crashing. Even when engineering teams have cluster access, the steep learning curve of Kubernetes can be daunting.
KubeAI Chatbot addresses these pain points by:
- Empowering Developers: Acting as an "on-demand DevOps partner" in Slack, helping teams troubleshoot and learn Kubernetes in real-time.
- Reducing DevOps Fatigue: Handling routine status checks and diagnostic questions, allowing DevOps engineers to focus on higher-value infrastructure work.
- Bridging the Knowledge Gap: Translating complex Kubernetes states into understandable natural language and actionable insights.
- Natural Language K8s: Manage your clusters by simply chatting in Slack.
- AI-Powered Command Generation: Automatically generates and executes `kubectl` commands based on your requests.
- Slack Native UI:
  - Built for Slack: Renders responses in Slack's native message style.
  - Tool Visibility: Automatically wraps command descriptions in code blocks for clarity.
  - Snippet Support: Automatically uploads long responses as text snippets to keep channels clean.
- Enterprise Safety Controls:
  - Zero-Trust Secrets: Strict, hardcoded blocking of any attempts to retrieve or list Kubernetes secrets.
  - Modification Guard: Prevent accidental resource modifications with the `AUTOMATIC_MODIFY_RESOURCES` safety switch.
- Multi-Cloud Ready: Support for GKE (with auth plugin), EKS, and standard clusters.
```shell
helm install kubeai-chatbot ./charts/kubeai-chartbot \
  --set env.SLACK_BOT_TOKEN="xoxb-..." \
  --set env.SLACK_SIGNING_SECRET="..." \
  --set env.GEMINI_API_KEY="..."
```

The easiest way to set up your Slack app is using the provided manifest:
- Go to api.slack.com/apps.
- Create a new app using "From a manifest".
- Copy the contents of `docs/slack_app_manifest.yaml` and paste it into the editor.
- Update the `request_url` to your hosted environment's `/slack/events` endpoint.
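For orientation, a Slack app manifest for a bot of this kind generally looks like the sketch below. The field names follow Slack's app-manifest schema, but the concrete values (app name, scopes, events, URL) are assumptions, not the actual contents of `docs/slack_app_manifest.yaml` — always use the file shipped with the project.

```yaml
# Illustrative Slack app manifest sketch — values are assumptions.
display_information:
  name: KubeAI Chatbot
features:
  bot_user:
    display_name: kubeai-chatbot
    always_online: true
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - chat:write
      - files:write
settings:
  event_subscriptions:
    request_url: https://your-host.example.com/slack/events
    bot_events:
      - app_mention
      - message.im
```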
| Variable | Description | Default |
|---|---|---|
| `SLACK_BOT_TOKEN` | Slack Bot User OAuth Token | Required |
| `SLACK_SIGNING_SECRET` | Slack app Signing Secret | Required |
| `AUTOMATIC_MODIFY_RESOURCES` | Enable/disable the AI's ability to run write commands | `false` |
| `KUBECONFIG` | Path to your kubeconfig file | `$HOME/.kube/config` |
| `LISTEN_ADDRESS` | Address for the bot to listen on | `0.0.0.0:8888` |
| `AUTH_METHOD` | Auth method (`SAML`, `OIDC`, or `NONE`) | `NONE` |
| `SESSION_TYPE` | Session storage (`postgres`, `file`, `memory`) | `memory` |
| `LOG_LEVEL` | Verbosity of logs (e.g., `2` for info) | `1` |
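Instead of repeating `--set` flags, these variables can be collected in a Helm values file. The sketch below assumes the chart maps everything under `env:` into container environment variables, as the `env.*` flags in the install command suggest; the filename is hypothetical.

```yaml
# my-values.yaml (sketch) — keys mirror the environment variables above;
# whether the chart also supports referencing existing Secrets is not shown here.
env:
  SLACK_BOT_TOKEN: "xoxb-..."
  SLACK_SIGNING_SECRET: "..."
  GEMINI_API_KEY: "..."
  AUTOMATIC_MODIFY_RESOURCES: "false"
  SESSION_TYPE: "memory"
  LOG_LEVEL: "2"
```

Pass it with `-f my-values.yaml` on the `helm install` command shown earlier.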
| Variable | Description | Default |
|---|---|---|
| `LLM_PROVIDER` | Legacy LLM service provider (`gemini`, `openai`) | `gemini` |
| `MODEL_ID` | Specific LLM model to use | `gemini-3-flash-preview` |
| `LLM_SKIP_VERIFY_SSL` | Skip SSL certificate verification (set to `1` or `true`) | `false` |
| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API authentication key | Required |
| `OPENAI_ENDPOINT` | Custom OpenAI endpoint URL | Optional |
| `OPENAI_API_BASE` | Base URL for OpenAI API | Optional |
| `OPENAI_MODEL` | Default model to use for OpenAI | Optional |
| `OPENAI_USE_RESPONSES_API` | Use OpenAI responses API (set to `true`) | `false` |
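Putting the two tables above together, pointing the bot at an OpenAI-compatible endpoint could look like this. The values are placeholders; only the variable names come from the tables above.

```shell
# Select the OpenAI provider and target a self-hosted, OpenAI-compatible endpoint.
export LLM_PROVIDER="openai"
export OPENAI_API_KEY="sk-..."                                  # placeholder key
export OPENAI_API_BASE="https://llm.internal.example.com/v1"    # assumed internal URL
export MODEL_ID="gpt-4o-mini"                                   # any model your endpoint serves
```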
| Variable | Description | Default |
|---|---|---|
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint URL | Required |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API authentication key | Required |
| Variable | Description | Default |
|---|---|---|
| `GEMINI_API_KEY` | Google Gemini API authentication key | Required |
| Variable | Description | Default |
|---|---|---|
| `GOOGLE_CLOUD_PROJECT` | GCP project ID for Vertex AI | Required |
| `GOOGLE_CLOUD_LOCATION` | GCP region/location for Vertex AI | Optional |
| `GOOGLE_CLOUD_REGION` | Alternative to `GOOGLE_CLOUD_LOCATION` | Optional |
| Variable | Description | Default |
|---|---|---|
| `GROK_API_KEY` | xAI Grok API authentication key | Required |
| `GROK_ENDPOINT` | Custom Grok endpoint URL | Optional |
| Variable | Description | Default |
|---|---|---|
| `LLAMACPP_HOST` | Host URL for LlamaCPP server | `http://127.0.0.1:8080/` |
| Variable | Description | Default |
|---|---|---|
| `BEDROCK_MODEL` | Model identifier for AWS Bedrock | Claude Sonnet 4 |
KubeAI Chatbot supports optional enterprise-grade authentication. When enabled, it provides:
- Identity-First Access: Users must authenticate via your IdP (Identity Provider) before using the chatbot.
- Kube-Native RBAC: Sessions are mapped to Kubernetes identities, allowing the bot to perform actions using client impersonation (RBAC).
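Impersonation-based RBAC of this kind generally requires the bot's own Kubernetes identity to hold the `impersonate` verb, while per-user permissions stay attached to the impersonated identities. A minimal sketch following standard Kubernetes RBAC (the role name is an assumption):

```yaml
# Sketch: grant the bot's service account the ability to impersonate
# users, groups, and service accounts. Bind it to the bot's ServiceAccount
# with a ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeai-chatbot-impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups", "serviceaccounts"]
  verbs: ["impersonate"]
```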
For detailed setup instructions, see:
KubeAI Chatbot is built with safety as a priority:
- Immutable Secrets: The bot is hardcoded to refuse any `kubectl` request involving secrets. This prevention happens at both the LLM prompt level and the tool execution validator.
- Confirmation Flow: By default, `AUTOMATIC_MODIFY_RESOURCES` is set to `false`. The bot will generate resource-modifying commands but will not execute them, instead providing the command for you to run manually.
- Use a Secret Manager: Although KubeAI Chatbot denies secret requests, it is strongly recommended to store sensitive information such as API keys, tokens, and other credentials in a secret manager. piggy supports AWS Secrets Manager and provides highly secure encapsulation without leaving any trace of the secret in Kubernetes.
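As defense in depth, the Kubernetes identity the bot runs under can simply be granted a role that omits secrets, so even a bypassed guard has nothing to read. A read-only ClusterRole sketch (the role name and exact resource list are assumptions — trim them to what your teams actually need):

```yaml
# Sketch: read-only access that deliberately excludes the "secrets" resource.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeai-chatbot-readonly
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "services", "configmaps", "events", "namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets", "daemonsets"]
  verbs: ["get", "list", "watch"]
```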
This project is a derivative work based on kubectl-ai, originally developed by Google LLC.
- Original Project: kubectl-ai
- License: Apache License 2.0
- Attribution: See the NOTICE file for detailed derivative work modifications and attributions.
Copyright 2026 KongZ.
