A comprehensive collection of enterprise AI applications and demonstrations for SUSE's cloud-native platform
This repository serves as a curated catalog of AI-related applications and demonstrations, showcasing deployment, security, observability, and management using SUSE's enterprise-grade cloud-native stack. Each application is packaged as a production-ready Helm chart, deployable via Rancher Apps & Marketplace, Helm CLI, or GitOps with Fleet.
- Repository Overview
- AI Compare Application
- Available Demonstrations
- Quick Start
- Repository Structure
- Advanced Configuration
- Contributing
This repository provides a curated collection of enterprise-ready AI applications and comprehensive demonstration materials organized into three main categories:
Production-ready Helm charts for AI workloads, each available in multiple variants:
- AI Compare - AI response comparison tool with security and observability demonstrations
  - `ai-compare-suse`: Enterprise SUSE BCI-based edition
  - `ai-compare`: Upstream community edition
  - `ai-compare-opentelemetry`: Advanced GenAI observability edition with token/cost tracking
- Ollama - Local LLM inference server with GPU acceleration
  - `ollama-suse`: SUSE enterprise edition
  - `ollama-upstream`: Upstream community edition
  - `ollama-suse-direct`: Direct NVIDIA GPU access variant
Key Application Features:
- Security Demonstrations: Built-in NeuVector DLP testing with dual data type transmission
- Enterprise Integration: OpenTelemetry observability, GPU acceleration, persistent storage
- Multi-Deployment Options: Rancher UI, Helm CLI, GitOps with Fleet
- Flexible Architecture: Direct model access or pipeline-enhanced processing
Guided demonstrations covering the complete AI application lifecycle:
- Infrastructure: GPU provisioning and management with Rancher
- Deployment: Multiple deployment methodologies for AI workloads
- Observability: AI-specific monitoring with SUSE Observability
- Security: Container security and runtime protection with NeuVector
- Zero-Trust: Network security and policy enforcement
Complete deployment automation and configurations:
- Helm Charts: Production-ready charts for all applications with SUSE and upstream variants
- GitOps: Fleet-based continuous deployment configurations
- Observability: Pre-configured monitoring and alerting
- Security: NeuVector policy automation and DLP configurations
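The Fleet-based flow works by pointing Fleet at this Git repository and letting it reconcile the charts it finds there. A `GitRepo` resource along the following lines drives that loop — this is an illustrative sketch only (the repository's actual Fleet configuration lives in `fleet/fleet.yaml`), though the `needs-llm-suse` cluster label matches the one used in the Quick Start section:

```yaml
# Illustrative GitRepo sketch -- the real configuration ships in fleet/fleet.yaml.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: ai-demos
  namespace: fleet-default
spec:
  repo: https://github.com/wiredquill/ai-demos.git
  branch: main
  paths:
    - fleet
  targets:
    # Deploy only to clusters carrying the opt-in label
    - clusterSelector:
        matchLabels:
          needs-llm-suse: "true"
```

Fleet continuously watches the repository, so pushing a chart change to the tracked branch rolls it out to every labeled cluster without a manual `helm upgrade`.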
| Chart Name | Version | Description | Variants |
|---|---|---|---|
| `ai-compare-suse` | 0.1.x | Enterprise AI comparison app (SUSE BCI) | Production |
| `ai-compare` | 0.1.x | AI comparison app (Upstream) | Community |
| `ai-compare-opentelemetry` | 0.1.x | Enhanced GenAI observability edition | Advanced |
| `ollama-suse` | 0.1.x | LLM inference server (SUSE) | Enterprise |
| `ollama-upstream` | 0.1.x | LLM inference server (Upstream) | Community |
Adding New Applications: This repository is designed to grow as a catalog of AI applications. To add new applications, package them as Helm charts and submit via pull request following the existing chart structure.
The flagship AI Compare application provides real-time comparison between:
- Direct Ollama: Local LLM inference (TinyLlama, Llama2, custom models)
- Pipeline-Enhanced: Processed responses through Open WebUI pipelines with educational levels:
  - Kid-friendly explanations
  - Student-level responses
  - Scientific detailed analysis
- Dual Data Transmission: A single button sends both credit card and SSN data
  - Credit Card: `3412-1234-1234-2222`
  - Social Security Number: `123-45-6789`
- NeuVector Integration: Triggers real-time DLP monitoring and alerting
- Clean Interface: Simple popup showing "⚠️ Attempting to send sensitive data"
- External Connectivity: Tests connection to https://suse.com
- Network Policy Validation: Demonstrates network segmentation capabilities
- Security Monitoring: Validates outbound connection policies
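NeuVector enforces network rules through its own runtime policies, but the intent of the availability demo can also be expressed as a standard Kubernetes NetworkPolicy. The sketch below is hypothetical — the pod label `app: ai-compare` and the policy name are assumptions, not taken from the charts — and shows the kind of deny-by-default egress rule the demo exercises: with it applied, the outbound check to https://suse.com only succeeds because DNS and HTTPS egress are explicitly allowed.

```yaml
# Hypothetical egress policy for the availability demo.
# The pod selector label is an assumption, not defined by the charts.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-ai-compare-egress
spec:
  podSelector:
    matchLabels:
      app: ai-compare
  policyTypes:
    - Egress
  egress:
    # Allow DNS lookups so the hostname can resolve
    - ports:
        - protocol: UDP
          port: 53
    # Allow outbound HTTPS (e.g. the https://suse.com connectivity check)
    - ports:
        - protocol: TCP
          port: 443
```

Removing the HTTPS rule and re-running the demo is a quick way to watch the connectivity check fail and validate that segmentation is actually enforced.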
- Real-time Observability: OpenTelemetry integration with SUSE Observability
- GPU Acceleration: NVIDIA GPU support with runtime configuration
- Persistent Storage: Model caching and configuration persistence
- Automation: Background testing and response comparison
- Provider Monitoring: Live status of major AI providers (OpenAI, Anthropic, Google, etc.)
| Demo | Focus Area | Duration | Key Takeaways |
|---|---|---|---|
| Demo 1: Accelerating AI with Rancher and GPUs | Infrastructure | 15 min | GPU provisioning, cluster management, hardware optimization |
| Demo 2: Deploying the SUSE AI Stack | Deployment | 20 min | Rancher UI, Helm CLI, GitOps deployment methods |
| Demo 3: Monitoring AI with SUSE Observability | Observability | 15 min | GPU metrics, cost tracking, performance optimization |
| Demo 4: Building Trustworthy AI | Security | 20 min | Container scanning, vulnerability management, policy automation |
| Demo 5: Zero-Trust Security for AI | Network Security | 15 min | Runtime protection, network policies, threat detection |
| Demo | Trigger | Data Transmitted | NeuVector Detection |
|---|---|---|---|
| Data Leak Demo | Single Button | Credit Card + SSN | Multi-pattern DLP alerts |
| Availability Demo | Single Button | HTTPS Request | Network policy validation |
- Kubernetes cluster (RKE2 recommended)
- Helm 3.x
- kubectl configured
- Optional: GPU nodes with NVIDIA drivers
Add this repository to Rancher's Apps & Marketplace by creating a ClusterRepo resource:
- Navigate to Cluster → More Resources → Catalog (catalog.cattle.io) → ClusterRepos
- Click Create from YAML
- Apply the following configuration:
```yaml
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: ai-demos
spec:
  url: https://wiredquill.github.io/ai-demos
```

- Click Create
- Charts will appear in Apps & Marketplace within 1-2 minutes
```bash
# Add the Helm repository
helm repo add ai-demos https://wiredquill.github.io/ai-demos

# Update repository index
helm repo update

# Search available charts
helm search repo ai-demos

# Install a chart
helm install my-release ai-demos/ai-compare-suse
```

- Open Rancher cluster management interface
- Navigate to Apps & Marketplace
- Search for "AI Compare"
- Select variant:
  - `ai-compare-suse`: Enterprise SUSE edition
  - `ai-compare`: Upstream community edition
- Configure and deploy
```bash
# SUSE Enterprise Edition
helm install ai-demo charts/ai-compare-suse \
  --set ollama.gpu.enabled=true \
  --set aiCompare.observability.enabled=true

# Upstream Community Edition
helm install ai-demo charts/ai-compare
```

```bash
# Deploy Fleet configuration
kubectl apply -f fleet/fleet.yaml

# Label target clusters
kubectl label cluster my-cluster needs-llm-suse=true
```

```bash
# Port forward to access locally
kubectl port-forward svc/ai-demo-app-service 7860:7860

# Open browser to http://localhost:7860
```

```
ai-demos/
├── app/                                  # AI Compare application source
│   ├── python-ollama-open-webui.py       # Main Gradio application
│   ├── Dockerfile.suse                   # SUSE BCI-based container
│   ├── Dockerfile.upstream               # Debian-based container
│   ├── requirements.txt                  # Python dependencies
│   └── tests/                            # Application test suite
├── charts/                               # Production Helm charts
│   ├── ai-compare/                       # Upstream community chart
│   └── ai-compare-suse/                  # SUSE enterprise chart
├── Demo Guides/                          # Step-by-step demonstrations
│   ├── demo-1.md                         # GPU infrastructure with Rancher
│   ├── demo-2.md                         # Multi-method deployment
│   ├── demo-3.md                         # SUSE Observability monitoring
│   ├── demo-4.md                         # Container security with NeuVector
│   └── demo-5.md                         # Zero-trust network security
├── fleet/                                # GitOps deployment automation
│   ├── fleet.yaml                        # Fleet configuration
│   └── gpu-operator/                     # GPU operator automation
├── install/                              # Infrastructure setup guides
│   ├── README.md                         # Installation overview
│   ├── Install-GPU-Operator.md           # GPU infrastructure setup
│   ├── Enable-SUSE-AI-Observability.md   # Monitoring configuration
│   └── Install-NVIDIA-drivers.md         # Driver installation guide
├── pipelines/                            # AI pipeline configurations
│   ├── response_level_pipeline.py        # Educational response processing
│   └── pipeline_config.yaml              # Pipeline configuration
├── docs/                                 # Technical documentation
│   ├── OPENTELEMETRY-INTEGRATION.md      # Observability setup
│   ├── AUTOMATED-DEPLOYMENT.md           # CI/CD configuration
│   └── AI-MODEL-CACHING.md               # Model caching strategies
├── assets/                               # Screenshots and documentation images
└── scripts/                              # Automation and utility scripts
```
```yaml
ollama:
  gpu:
    enabled: true
    hardware:
      type: nvidia
```

```yaml
aiCompare:
  observability:
    enabled: true
    otlpEndpoint: "http://opentelemetry-collector.suse-observability.svc.cluster.local:4318"
    collectGpuStats: true
```

```yaml
neuvector:
  enabled: true
  dlpPolicies: true
  securityDemos: true
```

```yaml
aiCompare:
  devMode:
    enabled: true
    persistence:
      enabled: true
    gitRepo: "https://github.com/your-org/ai-demos.git"
```

This repository welcomes contributions of new AI applications, improved demonstrations, and enhanced documentation.
To add a new AI application to the repository:
1. Create Helm Chart Structure

   ```bash
   # Create chart directories following the naming convention
   mkdir -p charts/your-app-name-suse
   mkdir -p charts/your-app-name  # for the upstream variant
   ```

2. Package Your Application
   - Follow existing chart patterns (see `charts/ai-compare-suse` as a reference)
   - Include both SUSE BCI and upstream variants when possible
   - Add a comprehensive values.yaml with documentation
   - Include a README.md explaining the application's purpose and configuration

3. Test Your Chart

   ```bash
   # Lint the chart
   helm lint charts/your-app-name-suse

   # Test deployment
   helm install test-release charts/your-app-name-suse
   helm test test-release
   ```

4. Submit Pull Request
   - Charts are automatically packaged and published to the gh-pages branch via CI/CD
   - Include demo documentation if applicable
   - Update the main README.md to list your application in the catalog table
- Fork the repository
- Create a feature branch: `git checkout -b feature/new-app` or `git checkout -b feature/improved-demo`
- Make your changes: Add new applications, demos, or improvements
- Test thoroughly: Ensure all changes work in both SUSE and upstream environments
- Submit a pull request: Include clear description of changes and testing performed
```bash
# Set up local development
git clone https://github.com/wiredquill/ai-demos.git
cd ai-demos

# For application development
cd app
pip install -r requirements.txt
python python-ollama-open-webui.py

# For chart development
helm lint charts/your-chart-name
helm install test charts/your-chart-name --dry-run --debug

# Run tests
pytest tests/
helm test my-release
```

Charts are automatically packaged and published via GitHub Actions:

- Commits to `main` trigger automatic chart packaging
- Charts are published to the gh-pages branch
- The Rancher ClusterRepo and Helm repository automatically pick up updates
- Manual packaging: See `.github/workflows/` for CI/CD pipeline details
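The publishing flow described above can be sketched with the community `helm/chart-releaser-action`. The workflow below is an illustrative sketch, not the repository's actual pipeline (which lives in `.github/workflows/`); the job and step names are assumptions:

```yaml
# Illustrative chart-publishing workflow; see .github/workflows/ for the real one.
name: Release Charts
on:
  push:
    branches: [main]
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # needed to push packaged charts to gh-pages
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # chart-releaser needs full history for tagging
      - uses: azure/setup-helm@v4
      - name: Package and publish charts
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: charts
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```

With a setup like this, bumping a chart's `version` in Chart.yaml and merging to `main` is all that is needed for the new release to appear in the Helm repository index.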
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Documentation: Check the docs/ directory for detailed technical guides
- Installation Help: Review install/README.md for setup assistance
- Issues: Open an issue in this repository for bug reports or feature requests
- Discussions: Use GitHub Discussions for questions and community support
Powered by SUSE's Enterprise Cloud-Native AI Platform
Complete demonstrations of enterprise AI workloads from infrastructure provisioning to application security and observability.