diff --git a/.claude-plugin/marketplace.json b/.claude-plugin/marketplace.json index d2b3806..be0aaf9 100644 --- a/.claude-plugin/marketplace.json +++ b/.claude-plugin/marketplace.json @@ -32,12 +32,17 @@ { "name": "utils", "source": "./plugins/utils", - "description": "A generic utilities plugin serving as a catch-all for various helper commands" + "description": "Utility commands for common development and debugging tasks" }, { "name": "prow-job", "source": "./plugins/prow-job", "description": "A plugin to analyze and inspect Prow CI job results" + }, + { + "name": "openshift-qe", + "source": "./plugins/openshift-qe", + "description": "Generate and execute comprehensive OpenShift test cases with AI-powered analysis and step-by-step execution" } ] } diff --git a/plugins/openshift-qe/.claude-plugin/marketplace.json b/plugins/openshift-qe/.claude-plugin/marketplace.json new file mode 100644 index 0000000..4c3469b --- /dev/null +++ b/plugins/openshift-qe/.claude-plugin/marketplace.json @@ -0,0 +1,8 @@ +{ + "name": "openshift-qe", + "description": "Generate and execute comprehensive OpenShift test cases with AI-powered analysis and step-by-step execution", + "version": "0.0.1", + "author": { + "name": "openshift-qe" + } + } \ No newline at end of file diff --git a/plugins/openshift-qe/commands/analyze-must-gather.md b/plugins/openshift-qe/commands/analyze-must-gather.md new file mode 100644 index 0000000..c089dcd --- /dev/null +++ b/plugins/openshift-qe/commands/analyze-must-gather.md @@ -0,0 +1,488 @@ +--- +description: Analyze OpenShift must-gather bundles to identify root causes, bugs, and cluster health issues +argument-hint: [must-gather-path] +--- + +## Name +analyze-must-gather + +## Synopsis +``` +/analyze-must-gather [must-gather-path] +``` + +## Description + +The `analyze-must-gather` command performs comprehensive AI-powered analysis of OpenShift must-gather bundles to identify: +- Root cause analysis (RCA) of cluster issues +- Critical bugs and anomalies +- Cluster health assessment +- Component-specific problems +- Actionable fix recommendations + +This command leverages advanced pattern recognition and AI diagnostics to automatically detect common and complex issues in must-gather data, providing SRE teams with immediate insights and resolution paths. 
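+
+For orientation, an extracted bundle is just a directory tree on disk. The sketch below is a minimal, hedged pre-check of the kind the command runs before deeper analysis; the directory names (`cluster-scoped-resources`, `namespaces`) follow the usual must-gather layout, though exact contents vary by gather image:
+
+```bash
+#!/usr/bin/env bash
+# Minimal bundle sanity check (assumes the standard must-gather layout).
+MG_PATH="${1:?usage: $0 <must-gather-path>}"
+[ -d "$MG_PATH" ] || { echo "Not a directory: $MG_PATH" >&2; exit 1; }
+
+# Gather data usually sits one level down, in a per-image subdirectory.
+DATA_DIR="$MG_PATH"
+if [ ! -d "$DATA_DIR/namespaces" ]; then
+  for sub in "$MG_PATH"/*/; do
+    [ -d "${sub}namespaces" ] && DATA_DIR="${sub%/}" && break
+  done
+fi
+
+for d in cluster-scoped-resources namespaces; do
+  if [ -d "$DATA_DIR/$d" ]; then
+    echo "found $d"
+  else
+    echo "missing $d (bundle may be incomplete)" >&2
+  fi
+done
+```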
+ +## Key Capabilities + +### Automated Analysis +- **Cluster Health Assessment**: Overall cluster status and health score +- **Anomaly Detection**: AI-powered identification of unusual patterns +- **Root Cause Analysis**: Deep dive into primary issues and their causes +- **Component Diagnostics**: Per-component health checks (API server, etcd, operators, nodes) +- **Log Analysis**: Automated parsing of component logs for errors and warnings +- **Configuration Review**: Validation of cluster configurations + +### Output Includes +- **Primary Issue**: Main problem affecting the cluster +- **Root Cause Summary**: Explanation of why the issue occurred +- **Evidence**: Supporting data from logs and configurations +- **Severity Level**: Critical, high, medium, or low +- **Immediate Actions**: Step-by-step remediation steps +- **Affected Components**: List of impacted cluster components +- **SRE Diagnostic Report**: Comprehensive report for incident management + +## How It Works + +### Step 1: Bundle Validation +- Verifies must-gather bundle path exists +- Checks for required directory structure +- Validates bundle completeness + +### Step 2: Data Extraction +- Scans cluster operator status +- Analyzes pod states across all namespaces +- Extracts node conditions and resource usage +- Parses component logs (API server, etcd, controller manager, scheduler) +- Reviews event logs for warnings and errors + +### Step 3: AI-Powered Analysis +- Pattern matching against known issue signatures +- Anomaly detection using statistical analysis +- Log correlation across multiple components +- Timeline reconstruction of events leading to issues +- Impact assessment on cluster functionality + +### Step 4: Root Cause Identification +- Identifies primary vs. secondary issues +- Traces issue propagation across components +- Determines root cause with supporting evidence +- Assesses severity and urgency + +### Step 5: Remediation Recommendations +- Provides immediate action items +- Suggests configuration changes +- Recommends oc commands for fixes +- Links to relevant documentation +- Escalation criteria for Red Hat support + +## Arguments + +- **$1** (must-gather-path): Path to the must-gather bundle directory + - Can be absolute path: `/path/to/must-gather.local.123456` + - Can be relative path: `./must-gather.local.123456` + - Can be compressed: `must-gather.tar.gz` (will be extracted) + +## Usage Examples + +### Example 1: Basic Analysis +``` +/analyze-must-gather /path/to/must-gather.local.123456 +``` + +**Expected Output:** +``` +🔍 Analyzing must-gather bundle... + +📊 Cluster Health Assessment: +Status: DEGRADED +Critical Issues: 3 +Warnings: 7 +Overall Health Score: 62/100 + +🚨 Primary Issue: +etcd cluster has lost quorum - 2 of 3 members unreachable + +🔎 Root Cause Analysis: +Network connectivity issues between control plane nodes +caused etcd members to become isolated, leading to quorum loss. + +📋 Evidence: +- etcd logs show "connection refused" errors +- Network policy changes detected 2 hours before issue +- Node-to-node connectivity test failures + +⚠️ Severity: CRITICAL + +✅ Immediate Actions: +1. Check network connectivity between control plane nodes +2. Review recent network policy changes +3. Verify firewall rules for etcd ports (2379, 2380) +4. 
Restart affected etcd members if connectivity restored + +📦 Affected Components: +- etcd (critical) +- kube-apiserver (degraded) +- All cluster operators (waiting for API) +``` + +### Example 2: Detailed Analysis with Interactive Q&A +``` +/analyze-must-gather ./must-gather.local.654321 + +User: What's causing the API server pods to crash? + +AI: Based on the must-gather analysis: + +🔍 API Server Crash Root Cause: +- OOM (Out of Memory) kills detected in API server pods +- Memory usage reached 8GB limit before crashes +- High request rate from cluster operators (500+ req/sec) + +Evidence from logs: +- kernel: Out of memory: Killed process 1234 (kube-apiserver) +- API server was handling 100K+ watch requests +- Etcd database size: 12GB (very large) + +Recommended Actions: +1. Increase API server memory limit to 16GB +2. Review and reduce watch requests from operators +3. Perform etcd defragmentation to reduce DB size +4. Consider scaling API server replicas + +Would you like me to generate the exact oc commands to apply these fixes? +``` + +### Example 3: Focus on Specific Component +``` +/analyze-must-gather /tmp/mg --component=operators + +AI: Analyzing cluster operators... + +📊 Cluster Operator Status: +Total: 32 +Available: 28 +Degraded: 3 +Unavailable: 1 + +🚨 Degraded Operators: +1. authentication (degraded for 2h15m) + - Reason: OAuth pods not ready + - Error: Image pull backoff for oauth-openshift:v4.14 + +2. network (degraded for 45m) + - Reason: OVN pods restarting frequently + - Error: CNI config validation failed + +3. monitoring (degraded for 1h30m) + - Reason: Prometheus PVC full + - Error: No space left on device + +❌ Unavailable Operators: +1. console (unavailable for 3h) + - Reason: Console route not accessible + - Error: 502 Bad Gateway from router pods + +✅ Recommendations: +1. Fix image pull issue for authentication operator +2. Validate and correct CNI configuration for network +3. Expand PVC or clean up Prometheus data for monitoring +4. Check router pod logs and route configuration for console +``` + +## Analysis Categories + +### 1. Cluster Operators +- Operator status (Available, Degraded, Progressing, Unavailable) +- Operator version mismatches +- Operator logs with errors +- Operator reconciliation failures + +### 2. Control Plane Components +- **API Server**: Crash loops, high latency, authentication issues +- **etcd**: Quorum status, database size, corruption, performance +- **Controller Manager**: Resource quota issues, failing controllers +- **Scheduler**: Pod scheduling failures, node affinity problems + +### 3. Node Health +- Node status (Ready, NotReady, Unknown) +- Disk pressure, memory pressure, PID pressure +- Kubelet issues and restarts +- Container runtime problems + +### 4. Pod and Container Issues +- CrashLoopBackOff pods +- ImagePullBackOff errors +- OOMKilled containers +- Pending pods and scheduling failures + +### 5. Network Problems +- CNI failures +- Service connectivity issues +- DNS resolution failures +- Network policy misconfigurations + +### 6. Storage Issues +- PVC binding failures +- Storage class problems +- Volume mount errors +- Disk space exhaustion + +### 7. Authentication & Authorization +- OAuth configuration errors +- Certificate expiration +- RBAC permission denials +- Identity provider failures + +## Common Issues Detected + +### Critical Issues +1. **etcd Quorum Loss**: Detected via etcd member status +2. **API Server Down**: No running API server pods +3. **Certificate Expiration**: Certs expired or expiring soon +4. 
**Node Failure**: Multiple nodes NotReady +5. **Storage Full**: etcd or node disk at capacity + +### High Priority Issues +1. **Operator Degradation**: Critical operators degraded +2. **Pod Crash Loops**: Control plane pods restarting +3. **Network Partitioning**: Node-to-node connectivity lost +4. **Resource Exhaustion**: CPU/Memory limits reached +5. **Database Corruption**: etcd consistency errors + +### Medium Priority Issues +1. **Performance Degradation**: Slow API responses +2. **Log Volume**: Excessive logging causing disk pressure +3. **Image Pull Issues**: Registry connectivity problems +4. **Configuration Drift**: Unexpected config changes +5. **Version Skew**: Component version mismatches + +## Interactive Capabilities + +After initial analysis, you can ask follow-up questions: + +### Root Cause Questions +``` +"What caused the etcd failure?" +"Why are pods failing to schedule?" +"What's the root cause of the network issue?" +``` + +### Component-Specific Questions +``` +"Tell me about the API server status" +"What's wrong with etcd?" +"Show me operator issues" +"Are there any node problems?" +``` + +### Fix Recommendations +``` +"How do I fix this?" +"What should I do first?" +"Give me the oc commands to resolve this" +"What's the priority order of fixes?" +``` + +### Timeline Questions +``` +"When did this issue start?" +"What changed before the failure?" +"Show me the event timeline" +``` + +## Output Format + +### Standard Analysis Output +``` +🔍 Must-Gather Analysis Report +===================================== + +📅 Cluster Information: +- Version: 4.14.3 +- Infrastructure: AWS +- Install Date: 2024-01-15 +- Must-Gather Timestamp: 2024-10-17 14:30 UTC + +📊 Cluster Health: +Overall Status: [HEALTHY|DEGRADED|CRITICAL] +Health Score: XX/100 +Critical Issues: X +High Priority: X +Warnings: X + +🚨 Primary Issue: +[Description of main problem] + +🔎 Root Cause Analysis: +[Detailed explanation of root cause] + +📋 Evidence: +- [Supporting evidence from logs] +- [Configuration issues found] +- [Timeline of events] + +⚠️ Severity: [CRITICAL|HIGH|MEDIUM|LOW] +Impact: [Description of impact] + +🔧 Immediate Actions: +1. [Action 1] +2. [Action 2] +3. [Action 3] + +📦 Affected Components: +- [Component 1]: [Status/Impact] +- [Component 2]: [Status/Impact] + +🔗 Related Issues: +- [Secondary issue 1] +- [Secondary issue 2] + +📚 References: +- [Link to docs] +- [KB articles] +- [Bug tracker references] +``` + +### SRE Diagnostic Report +``` +📋 SRE DIAGNOSTIC REPORT +===================================== + +INCIDENT SUMMARY: +- Incident ID: [Auto-generated] +- Severity: [Level] +- Start Time: [Timestamp] +- Detection: [How detected] + +IMPACT ASSESSMENT: +- User Impact: [Description] +- Affected Services: [List] +- Estimated Affected Users: [Count/Percentage] + +ROOT CAUSE: +[Detailed technical root cause] + +CONTRIBUTING FACTORS: +- [Factor 1] +- [Factor 2] + +RESOLUTION STEPS: +1. [Immediate fix] +2. [Short-term fix] +3. [Long-term prevention] + +PREVENTION MEASURES: +- [Recommendation 1] +- [Recommendation 2] + +ESCALATION PATH: +[When to escalate to Red Hat support] +``` + +## Best Practices + +### Before Running Analysis +1. **Collect fresh must-gather**: `oc adm must-gather` +2. **Ensure complete bundle**: Verify all expected files present +3. **Note symptoms**: Document observed issues before analysis + +### During Analysis +1. **Review full output**: Don't skip sections +2. **Ask clarifying questions**: Use interactive Q&A +3. 
**Cross-reference**: Compare with cluster monitoring + +### After Analysis +1. **Prioritize actions**: Follow recommended priority order +2. **Test fixes incrementally**: Apply one fix at a time +3. **Collect new must-gather**: After fixes for comparison +4. **Document findings**: Update incident reports + +## Prerequisites + +- Must-gather bundle collected from cluster +- Bundle extracted to local filesystem +- Read access to bundle directory +- (Optional) Network access to cluster for live validation + +## Tips for Best Results + +### Comprehensive Must-Gather +Collect with all required components: +```bash +oc adm must-gather \ + --image=registry.redhat.io/openshift4/ose-must-gather:latest \ + --dest-dir=/tmp/must-gather +``` + +### Include Additional Data +For specific issues, gather extra data: +```bash +# For network issues +oc adm must-gather -- /usr/bin/gather_network_logs + +# For storage issues +oc adm must-gather -- /usr/bin/gather_storage_logs +``` + +### Specify Component Focus +If you know the problem area: +``` +/analyze-must-gather /path/to/mg --focus=etcd +/analyze-must-gather /path/to/mg --focus=networking +/analyze-must-gather /path/to/mg --focus=operators +``` + +## Common Use Cases + +### 1. Post-Incident Analysis +``` +/analyze-must-gather ./incident-20241017/must-gather +"What caused the outage?" +"Show me the event timeline" +"What should we change to prevent this?" +``` + +### 2. Proactive Health Check +``` +/analyze-must-gather ./weekly-mg +"Are there any potential issues?" +"Show me warnings that could become critical" +"What's the cluster health trend?" +``` + +### 3. Upgrade Validation +``` +/analyze-must-gather ./pre-upgrade-mg +/analyze-must-gather ./post-upgrade-mg +"Compare the two must-gathers" +"What changed after the upgrade?" +``` + +### 4. Performance Investigation +``` +/analyze-must-gather ./perf-issue-mg +"Why is the API slow?" +"Show me resource usage patterns" +"Are there any bottlenecks?" +``` + +## Advanced Features + +### Automated Bug Detection +- Matches issues against known bug database +- Suggests relevant BZ (Bugzilla) references +- Identifies if issue has upstream fix + +### Trend Analysis +- Compares multiple must-gathers over time +- Identifies degradation patterns +- Predicts potential failures + +### Compliance Checking +- Validates best practice configurations +- Checks security postures +- Identifies deviations from standards + +--- + +**Ready to analyze must-gather bundles!** Provide the path to your must-gather directory and get instant AI-powered diagnostics, root cause analysis, and actionable fix recommendations. diff --git a/plugins/openshift-qe/commands/execute-test-case.md b/plugins/openshift-qe/commands/execute-test-case.md new file mode 100644 index 0000000..e405934 --- /dev/null +++ b/plugins/openshift-qe/commands/execute-test-case.md @@ -0,0 +1,306 @@ +--- +description: Execute OpenShift test cases step-by-step with detailed reporting and real-time progress +--- + +# Execute OpenShift Test Case + +You are an expert OpenShift QE engineer specializing in test execution. Your task is to execute OpenShift test cases step-by-step and provide detailed results. + +## Instructions + +Gather the necessary information from the user to execute their OpenShift test case. This typically includes: + +- **Feature**: What feature is being tested? (e.g., "Pod Deployment", "Event TTL Configuration") +- **Component**: Which OpenShift component? 
(e.g., "pod", "kube-apiserver", "oauth") +- **Scenario**: Test scenario description (e.g., "Deploy nginx pod and verify readiness") +- **Namespace** (optional): Target namespace (defaults to "test") +- **Configuration** (optional): Custom paths, timeouts, etc. + +Validate the environment before execution: +- OpenShift cluster accessibility +- Required permissions +- oc CLI availability +- Namespace existence or creation capability + +Execute the test with appropriate monitoring and capture: +- Real-time progress tracking +- Command execution with output capture +- Error handling and reporting +- Performance metrics + +Present comprehensive results including: +- Overall execution status +- Detailed step-by-step results +- Failure analysis if applicable +- Recommendations for next steps + +## Example Interaction + +**User**: Execute a test for pod deployment + +**Assistant**: I'll execute a pod deployment test. Let me gather the details: + +**Required Information:** +- Feature: Pod Deployment +- Component: pod +- Scenario: Deploy nginx pod and verify readiness +- Namespace: test (default) + +**Optional Configuration:** +- OC CLI Path: /usr/local/bin/oc (default: system PATH) +- Kubeconfig: ~/.kube/config (default) +- Timeout per step: 300 seconds (default) + +Would you like to customize any of these settings, or should I proceed with defaults? + +**User**: Use defaults + +**Assistant**: [Executes test and provides detailed results] + +Execution completed! Here are the results: + +**Overall Status**: ✅ PASSED + +**Summary**: +- Total Steps: 5 +- Passed: 5 +- Failed: 0 +- Duration: 45.2 seconds + +**Step Details**: +[Detailed output for each step with commands, outputs, and status] + +## Execution Workflow + +### Step 1: Pre-Flight Checks +- Verify oc CLI is installed +- Check cluster connectivity +- Validate user permissions +- Confirm namespace access + +### Step 2: Test Execution +- Create test namespace (if needed) +- Execute test steps sequentially +- Capture command outputs +- Monitor for errors +- Track execution time + +### Step 3: Results Collection +- Gather all step results +- Calculate success/failure rates +- Identify failure points +- Collect relevant logs + +### Step 4: Post-Execution Cleanup +- Clean up test resources (if specified) +- Delete test namespace (optional) +- Restore original state + +## Configuration Options + +### OC CLI Path +Specify custom oc binary location: +- Default: Uses system PATH +- Custom: `/usr/local/bin/oc` +- Custom: `/opt/homebrew/bin/oc` + +### Kubeconfig Path +Specify custom kubeconfig: +- Default: `~/.kube/config` +- Custom: `/path/to/custom/kubeconfig` +- Environment: Uses `$KUBECONFIG` if set + +### Timeout per Step +Maximum wait time for each step: +- Default: 300 seconds (5 minutes) +- Short tests: 60 seconds +- Long tests: 600 seconds (10 minutes) + +## Understanding Results + +### Overall Status +- **PASSED** ✅: All steps completed successfully +- **FAILED** ❌: One or more steps failed +- **PARTIAL** ⚠️: Some steps passed, others failed + +### Step Status +Each step shows: +- **Step Number**: Sequential order +- **Step Name**: What the step does +- **Duration**: How long it took +- **Status**: passed/failed +- **Commands**: oc CLI commands executed +- **Output**: Command stdout +- **Errors**: Command stderr (if any) +- **Exit Code**: Command exit code (0 = success) + +### Common Step Types + +1. **Setup Steps** + - Create namespace + - Set up test prerequisites + - Configure resources + +2. 
**Execution Steps**
+   - Deploy resources
+   - Apply configurations
+   - Trigger actions
+
+3. **Validation Steps**
+   - Check resource status
+   - Verify configurations
+   - Validate outputs
+
+4. **Cleanup Steps**
+   - Delete test resources
+   - Remove namespaces
+   - Restore state
+
+## Troubleshooting Failed Tests
+
+If a test fails, check:
+
+### 1. Cluster Connectivity
+```bash
+oc whoami
+oc cluster-info
+```
+
+### 2. Permissions
+```bash
+oc auth can-i create pods
+oc auth can-i create namespace
+```
+
+### 3. Resource Availability
+```bash
+oc get nodes
+oc describe node
+```
+
+### 4. Namespace Issues
+```bash
+oc get namespace
+oc describe namespace
+```
+
+### 5. Pod Issues
+```bash
+oc get pods -n <namespace>
+oc describe pod <pod-name> -n <namespace>
+oc logs <pod-name> -n <namespace>
+```
+
+## Next Steps After Execution
+
+### If Test PASSED ✅
+1. Review execution logs for any warnings
+2. Verify expected behavior manually
+3. Run additional related tests
+4. Document successful validation
+5. Consider running with different parameters
+
+### If Test FAILED ❌
+1. **Identify Failure Point**
+   - Check which step failed
+   - Review error messages
+   - Examine command outputs
+
+2. **Debug with `/debug-test-failure`**
+   - Use the debug slash command
+   - Get AI-powered failure analysis
+   - Receive fix recommendations
+
+3. **Manual Investigation**
+   - Run failed commands manually
+   - Check cluster logs
+   - Verify resource states
+
+4. **Fix and Retry**
+   - Apply suggested fixes
+   - Re-run the test
+   - Validate resolution
+
+## Example Execution Output
+
+```
+🚀 Starting Test Execution
+
+Feature: Pod Deployment
+Component: pod
+Namespace: test-pod-12345
+
+Pre-Flight Checks:
+✅ oc CLI found: /usr/local/bin/oc
+✅ Cluster accessible: https://api.cluster.example.com:6443
+✅ User authenticated: system:admin
+
+Step 1: Create Test Namespace
+  Command: oc create namespace test-pod-12345
+  Duration: 1.2s
+  Status: ✅ PASSED
+  Output: namespace/test-pod-12345 created
+
+Step 2: Deploy nginx Pod
+  Command: oc run nginx --image=nginx:latest -n test-pod-12345
+  Duration: 3.5s
+  Status: ✅ PASSED
+  Output: pod/nginx created
+
+Step 3: Wait for Pod Ready
+  Command: oc wait --for=condition=Ready pod/nginx -n test-pod-12345 --timeout=60s
+  Duration: 15.3s
+  Status: ✅ PASSED
+  Output: pod/nginx condition met
+
+Step 4: Verify Pod Status
+  Command: oc get pod nginx -n test-pod-12345 -o jsonpath='{.status.phase}'
+  Duration: 0.8s
+  Status: ✅ PASSED
+  Output: Running
+
+Step 5: Cleanup - Delete Namespace
+  Command: oc delete namespace test-pod-12345
+  Duration: 2.5s
+  Status: ✅ PASSED
+  Output: namespace "test-pod-12345" deleted
+
+📊 Execution Summary
+Overall Status: ✅ PASSED
+Total Steps: 5
+Passed: 5
+Failed: 0
+Total Duration: 23.3 seconds
+
+✅ All tests passed successfully!
+```
+
+## Integration with Other Commands
+
+### Generate → Execute → Debug Workflow
+
+1. **Generate Test**
+   ```
+   /generate-test-case
+   Feature: Pod Deployment
+   Component: pod
+   Format: Shell
+   ```
+
+2. **Execute Test**
+   ```
+   /execute-test-case
+   Feature: Pod Deployment
+   Component: pod
+   Scenario: Deploy and verify nginx pod
+   ```
+
+3. **Debug Failures** (if needed)
+   ```
+   /debug-test-failure
+   [Paste execution results]
+   ```
+
+---
+
+**Ready to execute tests!** Ask the user for test details and run comprehensive OpenShift test execution with detailed reporting.
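+
+As a closing illustration, the per-step fields reported above (command, duration, status, output, errors, exit code) map naturally onto a small shell harness. This is a hedged sketch of what such a wrapper could look like, not the executor the command actually uses; `run_step` and `STEP_TIMEOUT` are names invented here, and `timeout` assumes GNU coreutils:
+
+```bash
+#!/usr/bin/env bash
+# Sketch: run one test step with a timeout and record the fields above.
+run_step() {
+  local name="$1"; shift
+  local out err start end rc
+  out=$(mktemp); err=$(mktemp)
+  start=$(date +%s)
+  timeout "${STEP_TIMEOUT:-300}" "$@" >"$out" 2>"$err"
+  rc=$?
+  end=$(date +%s)
+  echo "Step: $name"
+  echo "  Command:   $*"
+  echo "  Duration:  $((end - start))s"
+  echo "  Exit code: $rc"
+  if [ "$rc" -eq 0 ]; then echo "  Status: PASSED"; else echo "  Status: FAILED"; fi
+  sed 's/^/  Output: /' "$out"
+  sed 's/^/  Error:  /' "$err"
+  rm -f "$out" "$err"
+  return "$rc"
+}
+
+run_step "Deploy nginx Pod" oc run nginx --image=nginx:latest -n test-pod-12345 &&
+run_step "Wait for Pod Ready" oc wait --for=condition=Ready pod/nginx -n test-pod-12345 --timeout=60s
+```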
diff --git a/plugins/openshift-qe/commands/generate-test-case.md b/plugins/openshift-qe/commands/generate-test-case.md new file mode 100644 index 0000000..0ab6f29 --- /dev/null +++ b/plugins/openshift-qe/commands/generate-test-case.md @@ -0,0 +1,125 @@ +--- +description: Generate comprehensive OpenShift test cases in YAML, Gherkin, Go, or Shell format +--- + +# Generate OpenShift Test Case + +You are an expert OpenShift QE engineer specializing in test case generation. Your task is to generate comprehensive test cases for OpenShift components in various formats for manual testing, documentation, and automation frameworks. + +## Instructions + +Gather the necessary information from the user to generate their OpenShift test case: + +- **Feature**: What feature needs to be tested? (e.g., "Event TTL Configuration", "Pod Security Admission") +- **Component**: Which OpenShift component? (e.g., "kube-apiserver", "oauth", "etcd", "pod") +- **Scenario**: Describe the test scenario (e.g., "Verify eventTTLMinutes configuration is applied correctly") +- **Test Format**: Which format? (YAML, Gherkin/BDD, Go/Ginkgo, or Shell Script) +- **Namespace** (optional): Target namespace for testing + +Generate a comprehensive test case based on the user's requirements. Choose the most appropriate approach for the requested format: + +**For structured test formats (YAML/Gherkin/Go):** +- Create well-structured test definitions with metadata, steps, validation, and cleanup +- Include proper test organization and clear validation points + +**For Shell Script format:** +- Generate detailed manual testing guides with: + - Prerequisites section + - Step-by-step manual test steps with oc CLI commands + - Expected outputs for each step + - Complete automation script in "Quick Verification Script" section + - Important Notes section + - Troubleshooting section with common issues + +Present the generated test case with: +- Format and component information +- Complete test content ready to use +- Usage instructions for execution +- Suggestions for validation or next steps + +## Example Interaction + +**User**: Generate a test case for Event TTL configuration in kube-apiserver + +**Assistant**: I'll generate a comprehensive test case for Event TTL configuration. Let me gather the details: + +- Feature: Event TTL Configuration +- Component: kube-apiserver +- Scenario: Verify eventTTLMinutes configuration is applied and events are cleaned up + +Which format would you like? +1. **YAML** - Structured test definition +2. **Gherkin** - BDD-style Given/When/Then +3. **Go/Ginkgo** - Go test code +4. **Shell Script** - Manual testing guide with oc CLI commands + +**User**: Shell Script + +**Assistant**: [Generates comprehensive manual testing guide with 10 steps, automation script, troubleshooting, etc.] 
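+
+To make the example concrete, an excerpt of the kind of "Quick Verification Script" such a guide might end with is sketched below. It is illustrative only: the jsonpath for `eventTTLMinutes` is a placeholder (where the field is exposed depends on your OpenShift version), and the namespace name is arbitrary:
+
+```bash
+#!/usr/bin/env bash
+set -euo pipefail
+
+# Placeholder: adjust the resource/jsonpath to wherever eventTTLMinutes
+# is exposed on your cluster version.
+ttl=$(oc get kubeapiserver cluster -o jsonpath='{.spec.eventTTLMinutes}')
+echo "Configured event TTL: ${ttl:-<unset>} minutes"
+
+# Generate a test event and confirm it is visible inside the TTL window.
+oc create namespace event-ttl-test --dry-run=client -o yaml | oc apply -f -
+oc run ttl-probe --image=busybox:latest --restart=Never -n event-ttl-test -- true
+sleep 5
+oc get events -n event-ttl-test --sort-by=.lastTimestamp | tail -n 5
+
+# Cleanup
+oc delete namespace event-ttl-test
+```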
+ +## Test Case Generation Tips + +### YAML Format +- Structured and machine-readable +- Easy to integrate with automation frameworks +- Clear test steps and validation points + +### Gherkin/BDD Format +- Human-readable Given/When/Then format +- Great for collaboration with non-technical stakeholders +- Follows Behavior-Driven Development practices + +### Go/Ginkgo Format +- Production-ready Go test code +- Uses Ginkgo framework (OpenShift standard) +- Can be run directly with `go test` + +### Shell Script Format +- **Most comprehensive format** +- Detailed manual testing guide with step-by-step oc CLI commands +- Includes expected outputs after each step +- Contains complete automation script +- Troubleshooting section for common issues +- Perfect for manual testing and documentation + +## Available OpenShift Components + +Common components you can test: +- `kube-apiserver` - Kubernetes API server +- `kube-controller-manager` - Controller manager +- `kube-scheduler` - Pod scheduler +- `oauth` - OAuth authentication +- `registry` - Container registry +- `ingress` - Ingress controller +- `etcd` - etcd cluster +- `pod` - Pod deployments +- `node` - Node management +- `network` - Network policies +- `storage` - Storage classes + +## Output Format + +After generating the test case, provide: + +1. **Summary** + - Feature being tested + - Component targeted + - Test format generated + +2. **Test Case Content** + - Complete test case code/script + - Well-formatted and ready to use + +3. **Usage Instructions** + - How to save the test case + - How to execute it + - Expected results + +4. **Next Steps** + - Suggest running the test with `/execute-test-case` + - Offer to generate additional test cases + - Provide validation tips + +--- + +**Ready to generate test cases!** Ask the user for their requirements and create comprehensive OpenShift test cases.
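+
+For reference, the sketch below shows one possible shape for the YAML format described above. It is an illustrative structure only — the plugin does not mandate a fixed schema — with field names (`metadata`, `steps`, `validation`, `cleanup`) chosen here to mirror the elements a generated test case typically contains:
+
+```yaml
+# Hypothetical test-case layout; field names are illustrative, not a schema.
+metadata:
+  feature: Event TTL Configuration
+  component: kube-apiserver
+  namespace: event-ttl-test
+steps:
+  - name: Check configured TTL
+    command: oc get kubeapiserver cluster -o yaml
+    expect: eventTTLMinutes is set to the desired value
+  - name: Create a probe event
+    command: oc run ttl-probe --image=busybox:latest --restart=Never -n event-ttl-test -- true
+    expect: pod/ttl-probe created and events recorded
+validation:
+  - oc get events -n event-ttl-test
+cleanup:
+  - oc delete namespace event-ttl-test
+```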