# 🚀 OpenShift AI MCP Server - Practical Usage Guide

## 📥 Installation

### **Option 1: Main Package (Recommended)**
```bash
npm install -g kubernetes-mcp-server-openshift-ai
```

### **Option 2: Platform-Specific**
```bash
# Linux AMD64
npm install -g kubernetes-mcp-server-openshift-ai-linux-amd64

# macOS ARM64 (Apple Silicon)
npm install -g kubernetes-mcp-server-openshift-ai-darwin-arm64
```

### **Option 3: Direct Download**
```bash
curl -sSL https://raw.githubusercontent.com/macayaven/openshift-mcp-server/main/install-openshift-ai.sh | bash
```
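
Whichever option you choose, a quick sanity check confirms the binary landed on your `PATH` (this sketch assumes the package installs a binary named `kubernetes-mcp-server`, as used throughout this guide):

```shell
# Report the installed version, or a hint if the binary is missing.
if command -v kubernetes-mcp-server >/dev/null 2>&1; then
  kubernetes-mcp-server --version
else
  echo "kubernetes-mcp-server not found on PATH - check your npm global bin directory"
fi
```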

## 🔧 Configuration

### **Basic Setup**
```bash
# Start with all toolsets (recommended)
kubernetes-mcp-server --toolsets core,config,helm,openshift-ai

# Start with specific toolsets
kubernetes-mcp-server --toolsets core,openshift-ai

# Check available toolsets
kubernetes-mcp-server --help
```

### **Kubernetes Configuration**
```bash
# Use a specific kubeconfig
kubernetes-mcp-server --kubeconfig ~/.kube/config

# Use the current context
kubernetes-mcp-server --toolsets openshift-ai

# Read-only mode (safe for production)
kubernetes-mcp-server --read-only --toolsets openshift-ai
```

## 🎯 Core Usage Scenarios

### **Scenario 1: Data Science Project Management**
```bash
# Start the server with OpenShift AI tools
kubernetes-mcp-server --toolsets core,config,helm,openshift-ai
```

With the server running, your AI assistant (Claude, Cursor, etc.) can use:

**Available Commands:**
- `create_datascience_project` - Create a new data science project
- `list_datascience_projects` - List all projects
- `get_datascience_project` - Get project details
- `update_datascience_project` - Modify an existing project
- `delete_datascience_project` - Remove a project

**Example Workflow:**
```
1. "Create a data science project called 'ml-experiments'"
2. "List all data science projects"
3. "Get details of the ml-experiments project"
4. "Add a description to the ml-experiments project"
```
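
Under the hood, the assistant turns each of these requests into an MCP `tools/call` JSON-RPC message over the server's stdio transport. A request for the first step might look roughly like this (the exact argument keys are defined by the server; `name` here is an assumption for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_datascience_project",
    "arguments": { "name": "ml-experiments" }
  }
}
```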

### **Scenario 2: Model Management**
```bash
# Start the server (same as above)
kubernetes-mcp-server --toolsets core,openshift-ai
```

**Available Commands:**
- `list_models` - List all models in a project
- `get_model` - Get model details
- `create_model` - Deploy a new model
- `update_model` - Update model configuration
- `delete_model` - Remove a model

**Example Workflow:**
```
1. "List all models in the ml-experiments project"
2. "Create a new PyTorch model with GPU support"
3. "Update the model to use 2 GPU replicas"
4. "Get current status of the PyTorch model"
```

### **Scenario 3: Application Deployment**
```bash
# Start the server
kubernetes-mcp-server --toolsets core,openshift-ai
```

**Available Commands:**
- `deploy_application` - Deploy a new application
- `list_applications` - List applications
- `get_application` - Get application details
- `delete_application` - Remove an application

**Example Workflow:**
```
1. "Deploy a Streamlit application with 3 replicas"
2. "List all applications in the project"
3. "Get details of the Streamlit app"
4. "Scale the application to 5 replicas"
5. "Delete the application when done"
```

### **Scenario 4: Experiment Management**
```bash
# Start the server
kubernetes-mcp-server --toolsets core,openshift-ai
```

**Available Commands:**
- `run_experiment` - Execute a new experiment
- `list_experiments` - List all experiments
- `get_experiment` - Get experiment details
- `delete_experiment` - Remove an experiment

**Example Workflow:**
```
1. "Run a training experiment with a PyTorch model"
2. "List all experiments in the project"
3. "Get results and logs of the training experiment"
4. "Delete the experiment after analyzing results"
```

### **Scenario 5: Pipeline Management**
```bash
# Start the server
kubernetes-mcp-server --toolsets core,openshift-ai
```

**Available Commands:**
- `run_pipeline` - Execute a pipeline
- `list_pipelines` - List all pipelines
- `get_pipeline` - Get pipeline details
- `create_pipeline` - Create a new pipeline
- `delete_pipeline` - Remove a pipeline

**Example Workflow:**
```
1. "Create a new ML pipeline for data preprocessing"
2. "Run the pipeline with the latest dataset"
3. "List all pipelines and their status"
4. "Get the execution logs of the preprocessing pipeline"
5. "Delete the pipeline after completion"
```

## 🛠️ Advanced Usage

### **Multi-Cluster Management**
```bash
# Work with multiple Kubernetes clusters
kubernetes-mcp-server --toolsets core,config,openshift-ai
```

With the `config` toolset enabled, the assistant can switch between clusters using the context tools.

### **Helm Integration**
```bash
# Include Helm tools
kubernetes-mcp-server --toolsets core,helm,openshift-ai
```

**Available Commands:**
- `list_helm_releases`
- `get_helm_release`
- `install_helm_chart`
- `upgrade_helm_release`
- `uninstall_helm_release`

### **Production Safety**
```bash
# Read-only mode (no destructive operations)
kubernetes-mcp-server --read-only --toolsets openshift-ai

# Disable destructive tools
kubernetes-mcp-server --disable-destructive --toolsets core,openshift-ai
```

## 🔍 Integration with AI Assistants

### **Claude Desktop Integration**
1. Install Claude Desktop
2. Go to Settings → MCP Servers
3. Add a new server:
   - **Name**: OpenShift AI Server
   - **Command**: `kubernetes-mcp-server --toolsets core,config,helm,openshift-ai`
4. Enable the server
5. Start chatting with Claude about your OpenShift resources
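
If your Claude Desktop build configures MCP servers via a JSON file rather than a settings UI, the equivalent entry in `claude_desktop_config.json` looks roughly like this (the file location and exact schema vary by version, so treat it as a sketch):

```json
{
  "mcpServers": {
    "openshift-ai": {
      "command": "kubernetes-mcp-server",
      "args": ["--toolsets", "core,config,helm,openshift-ai"]
    }
  }
}
```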

### **Cursor Integration**
1. Open Cursor
2. Go to Settings → AI Tools
3. Add an MCP server:
   - **Name**: OpenShift AI
   - **Path**: `/usr/local/bin/kubernetes-mcp-server`
   - **Args**: `--toolsets core,config,helm,openshift-ai`
4. Restart Cursor
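
Recent Cursor versions read MCP servers from a JSON file instead of the settings UI; a project-level `.cursor/mcp.json` sketch (same caveat about version differences applies):

```json
{
  "mcpServers": {
    "openshift-ai": {
      "command": "/usr/local/bin/kubernetes-mcp-server",
      "args": ["--toolsets", "core,config,helm,openshift-ai"]
    }
  }
}
```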

### **GitHub Copilot Integration**
1. Install the GitHub Copilot extension
2. Configure the MCP server in its settings
3. Use the same command as above

## 💡 Real-World Examples

### **Example 1: ML Project Setup**
```
You: "I want to create a new ML project for fraud detection"

AI: "I'll help you create a data science project for fraud detection.
[Creates project with appropriate resources]

Your fraud detection project is now ready! Here's what I created:
- Project: fraud-detection-ml
- Namespace: fraud-detection-ml
- Storage: 100Gi PVC
- Service Account: fraud-detection-sa

Next steps:
1. Deploy your training data
2. Set up model serving
3. Configure monitoring
Would you like me to help with any of these?"
```

### **Example 2: Model Deployment**
```
You: "Deploy a BERT model for text classification"

AI: "I'll deploy a BERT model for text classification in your project.
[Deploys model with appropriate resources]

BERT model deployed successfully! Details:
- Model: bert-text-classifier
- Framework: PyTorch
- Replicas: 2
- Memory: 4Gi each
- Endpoint: https://bert-text-classifier.apps.cluster.com

The model is ready for inference requests!"
```

### **Example 3: Batch Experiment**
```
You: "Run a hyperparameter tuning experiment"

AI: "I'll set up a hyperparameter tuning experiment for your model.
[Creates experiment with multiple parameter combinations]

Hyperparameter experiment started! Details:
- Experiment: bert-hp-tuning-v2
- Parameters: learning_rate, batch_size, epochs
- Trials: 27 combinations
- Max parallel: 3 trials

The experiment will run all combinations and report the best configuration."
```

## 🔧 Troubleshooting

### **Common Issues**
```bash
# Check that the server binary works
kubernetes-mcp-server --version

# Test a specific toolset
kubernetes-mcp-server --toolsets openshift-ai --help

# Check cluster connectivity
kubectl cluster-info

# Verify OpenShift AI access
oc get datascienceprojects
```
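
The checks above can be bundled into a single sketch that reports which prerequisites are on your `PATH` (tool names as used in this guide; adjust if yours differ):

```shell
# Print one status line per required tool.
for tool in kubernetes-mcp-server kubectl oc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```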

### **Debug Mode**
```bash
# Enable verbose logging
kubernetes-mcp-server --log-level 9 --toolsets openshift-ai

# Validate flags and toolsets without touching the cluster
kubernetes-mcp-server --toolsets core,openshift-ai --help
```

## 📚 Next Steps

### **Learning Resources**
- OpenShift AI Documentation: https://docs.redhat.com/en-us/openshift_ai/
- Kubernetes Documentation: https://kubernetes.io/docs/
- MCP Documentation: https://modelcontextprotocol.io/

### **Community**
- GitHub Repository: https://github.com/macayaven/openshift-mcp-server
- Issues: Report bugs or request features
- Discussions: Ask questions and share workflows

---

**🎉 You now have a complete OpenShift AI MCP server with 28 tools for full ML lifecycle management!**