Demo Kit for Partner SE Enablement

Watch this video for an overview of the goals of this Partner Demo Kit and how to execute it.
This repository is your complete training environment for mastering Harness demonstrations and sales enablement. Designed for both potential partners evaluating Harness partnership opportunities and current partners seeking sales engineering enablement training, this kit provides everything needed to deliver compelling Harness.io demonstrations using local resources, without requiring complex cloud infrastructure or specialized environments.
Built from our Unscripted conference workshop materials, this hands-on training culminates in a customer pitch recording where you demonstrate your ability to sell Harness to prospective clients.
All demo resources are created in a dedicated Harness project (default: "Base Demo", customizable during setup) to keep demo activities segregated from other projects. Please note that if you are using a partner-licensed Harness instance, it is important to name your project something other than the default, as other colleagues may already have created a project named "Base Demo".
After completing this training, you will be able to:
✅ Independently execute a complete Harness demonstration
✅ Implement a Harness demo environment with version-controlled sample applications and pipelines
✅ Construct customer-specific demonstrations by mapping appropriate Harness features to identified pain points and creating relevant proof-of-concept environments
- Harness Skills: Hands-on experience with CI/CD, Code Repository, Continuous Verification, and Security Testing
- Harness Home Lab: Your own local demonstration environment ready for customer presentations
This training consists of four progressive sections:
- 3-Minute Guide to Partner Technical Sales Training - Get started quickly with automated setup
- Hands-On Lab: Navigate the {Unscripted} Demo Track - Complete the guided demo labs
- Infrastructure Setup: Building Your Harness Home Lab - Deep dive into manual configuration
- Final: Create and Submit Your Custom Demo Recording - Demonstrate your mastery
Next Steps: After completing the training, you'll have a fully functional Harness demo environment and the skills to customize demonstrations for customer engagements.
- Minimal Prerequisites: Runs on standard developer workstation using common tools
- Self-Contained: All necessary components included (Terraform configs, sample application code)
- Customizable: Use as a foundation for building customer-specific demonstrations
- Field-Tested: Based on materials from Harness Unscripted workshops
- Project Segregation: All resources created in a dedicated project (customizable name)
- Code Repository Secret Scanning - Block sensitive data from being committed
- CI Pipeline with Test Intelligence - Automated testing and Docker image builds (Harness Cloud)
- Continuous Deployment - Rolling and canary deployment strategies (local Kubernetes)
- Continuous Verification - Automated deployment validation using Prometheus metrics
- Security Testing - Available with licensed partner organization
- Policy Enforcement (OPA) - Available with licensed partner organization
The demo kit consists of:
- Harness Manager (SaaS) - CI/CD pipelines, connectors, and built-in features (Harness Cloud, Code Repository, Artifact Registry)
- Local Machine - Kubernetes cluster (Colima/Rancher/minikube/Docker Desktop) running the delegate, lab documentation, and demo applications
- Docker Hub - Container image storage with authentication via Harness Platform
- Local Browser - Access documentation at localhost:30001 and the demo app at localhost:8080
- CI Builds: Harness Cloud (requires credit card for account verification - free tier available)
- CD Deployments: Local Kubernetes (Colima for Apple Silicon, minikube/Docker Desktop/Rancher Desktop for others)
- Git: Version control
- Docker: Container runtime (Docker Desktop or Docker Engine)
- Kubernetes (platform-specific):
- macOS (Apple Silicon M1/M2/M3/M4): Colima (REQUIRED for AMD64 emulation via Rosetta 2)
  Note: First startup takes 5-10 minutes. Harness Cloud builds AMD64 images, so AMD64 emulation is required. The start-demo.sh script will detect missing dependencies and offer to install them automatically.

  brew install colima docker kubectl qemu lima-additional-guestagents
  colima start --vm-type=vz --vz-rosetta --arch x86_64 --cpu 4 --memory 8 --kubernetes

- macOS (Intel): Choose one - minikube, Colima, Docker Desktop, or Rancher Desktop
- Windows: minikube (recommended), Docker Desktop, or Rancher Desktop
- Linux: minikube or your preferred K8s distribution
- Minimum Cluster Resources: 4 CPU cores and 8GB memory
- The start-demo.sh script validates cluster resources and provides remediation guidance if insufficient
- kubectl: Kubernetes CLI (usually included with above tools)
- Helm: Kubernetes package manager
- Terraform: Infrastructure as Code tool (v1.0+)
- Node.js & npm: For frontend application (Node 20+)
- Python: For backend application (Python 3.8+)
- Harness Account: Sign up at app.harness.io
- Enable modules: CI (Continuous Integration), CD (Continuous Delivery), Code Repository
- Important: Harness Cloud requires credit card verification (free tier available)
- Add credit card in Account Settings > Billing to enable Harness Cloud for CI builds
- Docker Hub Account: Sign up at hub.docker.com
- Create a repository named harness-demo
- Generate a Personal Access Token (Settings > Security > Personal Access Tokens)
| OS | Version | Architecture | Kubernetes Option | Notes |
|---|---|---|---|---|
| macOS | 12.0+ (Monterey) | Apple Silicon (M1/M2/M3/M4) | Colima (required) | Uses Rosetta 2 for AMD64 emulation |
| macOS | 12.0+ (Monterey) | Intel | Colima, minikube, Docker Desktop, Rancher Desktop | Any K8s option works |
| Linux | Ubuntu 20.04+, Debian 11+, Fedora 36+ | x86_64 | minikube, Docker Desktop, Rancher Desktop | Native AMD64, no emulation needed |
| Windows | 10 (Build 19041+), 11 | x86_64 | minikube, Docker Desktop, Rancher Desktop | Requires WSL2 or Git Bash for scripts |
| Resource | Minimum | Recommended | Notes |
|---|---|---|---|
| CPU | 4 cores | 6+ cores | Required for Kubernetes cluster + builds |
| RAM | 8 GB | 16 GB | K8s cluster needs 4-8GB allocation |
| Disk | 20 GB free | 40 GB free | Docker images + K8s storage |
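The start-demo.sh script validates the cluster against these minimums before continuing. A minimal sketch of that kind of check, assuming a simple compare-against-floor approach (the check_resources function name and the kubectl jsonpath shown in the comment are illustrative, not the script's actual code):

```shell
# Hypothetical sketch of a cluster-resource check against the minimums above.
# check_resources <cpu_cores> <mem_gb> -> 0 if the 4-CPU / 8 GB floor is met
check_resources() {
  local cpus="$1" mem_gb="$2"
  local min_cpu=4 min_mem=8
  if [ "$cpus" -lt "$min_cpu" ] || [ "$mem_gb" -lt "$min_mem" ]; then
    echo "insufficient resources: have ${cpus} CPUs / ${mem_gb}GB, need ${min_cpu} CPUs / ${min_mem}GB"
    return 1
  fi
  echo "cluster resources OK (${cpus} CPUs / ${mem_gb}GB)"
}

# Real values could be read from the cluster, e.g.:
#   kubectl get nodes -o jsonpath='{.items[0].status.allocatable.cpu}'
check_resources 4 8
```

When the check fails, the script points you at remediation (e.g., restarting Colima/minikube with larger `--cpu`/`--memory` values).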
🍎 Apple Silicon (M1/M2/M3/M4):
- Colima with Rosetta 2 is required for AMD64 emulation (Harness Cloud builds AMD64 images)
- Docker Desktop also works but Colima is recommended for better resource efficiency
- First Colima startup takes 5-10 minutes to download AMD64 base images
🐧 Linux:
- Any modern distribution with Docker support works
- minikube is the simplest option for most users
- Ensure your user is in the docker group: sudo usermod -aG docker $USER
🪟 Windows:
- WSL2 or Git Bash required to run the setup scripts (bash)
- Docker Desktop with WSL2 backend recommended
- See "Windows Users - Important Setup Notes" below for detailed setup
The automation scripts (start-demo.sh and stop-demo.sh) are bash scripts that require a bash-compatible environment on Windows. You have two options:
Option 1: Git Bash (Recommended for Simplicity)
- Install Git for Windows which includes Git Bash
- Install Docker Desktop, kubectl, helm, and Terraform in Windows (not WSL)
- Run all commands in Git Bash terminal
- Docker Desktop's Kubernetes integration works seamlessly
- ✅ Pros: Simpler setup, native Windows Docker performance
⚠️ Note: Some Unix commands may have limited functionality
Option 2: WSL2 (Recommended for Advanced Users)
- Install Windows Subsystem for Linux 2 (WSL2)
- Install Docker Desktop for Windows with WSL2 integration enabled
- Install kubectl, helm, and Terraform inside WSL using Linux package managers
- Run all commands in WSL terminal (Ubuntu, Debian, etc.)
- ✅ Pros: Full Linux compatibility, better for complex workflows
⚠️ Networking: WSL2 localhost forwarding handles port access automatically
⚠️ Paths: Use Linux paths (/home/user/) not Windows paths (C:\Users\)
Kubernetes Options for Windows:
- minikube (recommended by the script) - Works with both Git Bash and WSL2
- Docker Desktop - Enable Kubernetes in settings (Settings > Kubernetes > Enable Kubernetes)
- Rancher Desktop - Alternative to Docker Desktop with built-in Kubernetes
Verification: After choosing your approach, verify your bash environment works:
# In Git Bash or WSL terminal
bash --version # Should show bash version
docker --version # Should show Docker version
kubectl version # Should show kubectl version

# Clone this repository
git clone https://github.com/harness-community/partner-demo-kit.git
cd partner-demo-kit

Recommended Location: Save in an easily accessible location like:
- ~/projects/partner-demo-kit
- ~/Documents/partner-demo-kit
For a faster setup experience, use the provided automation scripts:
# Make scripts executable (first time only)
chmod +x start-demo.sh stop-demo.sh
# Start all local infrastructure
./start-demo.sh

Once the startup script completes, access the demo at these URLs:
| Service | URL | Description |
|---|---|---|
| Lab Documentation | http://localhost:30001 | Interactive lab guides for the demo walkthrough |
| Demo Application | http://localhost:8080 | Frontend web application (after deployment) |
| Harness UI | https://app.harness.io | Harness platform - select your demo project |
Recommended Setup: Use Chrome's split tab view (or two browser windows side-by-side) with:
- Left side: Harness UI at https://app.harness.io
- Right side: Lab documentation at http://localhost:30001
This allows you to follow the lab instructions while working in the Harness platform without switching tabs.
Note for minikube users: Run minikube tunnel in a separate terminal to access services at localhost.
The start-demo.sh script automates the complete demo setup from local infrastructure to Harness resources:
1. Prerequisites Check
- Verifies Docker, kubectl, Terraform, and other required tools are installed
- Checks that Docker daemon is running
2. Platform & Kubernetes Detection
- Detects your operating system and architecture (macOS/Windows/Linux, ARM64/AMD64)
- Validates Kubernetes tool based on platform:
- Apple Silicon Macs: Requires Colima with AMD64 emulation
- Windows: Recommends minikube (allows Docker Desktop/Rancher Desktop)
- Other platforms: Flexible (minikube, Colima, Docker Desktop, Rancher Desktop)
- Automatically starts Colima/minikube if needed
- Verifies cluster architecture (ensures AMD64 for Apple Silicon)
- Validates cluster resources (minimum 4 CPU cores, 8GB memory) with remediation guidance
3. Prometheus Deployment (Background)
- Creates monitoring namespace if it doesn't exist
- Deploys Prometheus in background (non-blocking - runs while Docker builds)
- Verifies Prometheus status at end of script
4. Docker Hub Authentication (Smart Detection)
- If already logged in (via Docker Desktop): Uses existing credentials automatically
- If not logged in: Checks for saved username in these locations (in order):
    - Local .demo-config file (from previous runs)
    - kit/se-parms.tfvars (Terraform configuration)
    - Interactive prompt (if not found)
  - Saves your username to .demo-config for future runs
  - Prompts for login with helpful instructions about using a Personal Access Token (PAT)
5. Docker Image Builds (Parallel!)
- Builds all three images simultaneously (backend, test, docs) in parallel
- Progress tracking shows status of each build in real-time
- Pushes to your Docker Hub repository
- Provides clear error messages if build or push fails
- Saves 2-4 minutes vs sequential builds
6. Harness Configuration & IaC Provisioning (Automated!)
- Smart credential collection: Reuses values from previous runs or prompts for:
- Harness Account ID (from URL when viewing your profile)
- Harness Personal Access Token (PAT)
- Docker Hub password/PAT (if not already logged in)
  - Automatic configuration: Updates kit/se-parms.tfvars with your values
  - IaC execution: Runs Terraform init, plan, and apply automatically
- Idempotent: Skips if state file already exists
- Creates all Harness resources: Project, connectors, environments, services, monitored services, code repository, etc.
7. Status Display
- Shows cluster status, Prometheus deployment, and Terraform results
- Provides clear next steps based on what was configured
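The parallel-build pattern in step 5 boils down to launching the builds as background jobs and collecting each exit status with wait. A sketch under stated assumptions: the image tags mirror this kit's conventions, but build_image, the DRY_RUN switch, and the docs build context are illustrative, not the script's literal code (set DRY_RUN=0 to actually invoke docker):

```shell
# Sketch (assumption) of the parallel Docker build pattern.
# DRY_RUN=1 (the default here) prints the commands instead of building.
DOCKER_USER="${DOCKER_USER:-dockerhubaccountid}"
DRY_RUN="${DRY_RUN:-1}"

build_image() {   # usage: build_image <context-dir> <tag>
  local cmd=(docker build -t "$DOCKER_USER/harness-demo:$2" "$1")
  if [ "$DRY_RUN" = "1" ]; then echo "${cmd[*]}"; else "${cmd[@]}"; fi
}

build_all() {
  local pids=() fail=0
  build_image backend      backend-latest & pids+=($!)
  build_image python-tests test-latest    & pids+=($!)
  build_image docs         docs-latest    & pids+=($!)   # docs context dir is an assumption
  for pid in "${pids[@]}"; do
    wait "$pid" || fail=1     # collect each background build's exit status
  done
  return "$fail"
}

build_all
```

Backgrounding the three builds is what saves the 2-4 minutes versus running them sequentially; the wait loop ensures a failure in any one build still fails the whole step.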
# Skip Docker image build (if you already have the backend image)
./start-demo.sh --skip-docker-build
# Skip Terraform/Harness setup (useful for infrastructure-only testing)
./start-demo.sh --skip-terraform
# Combine options
./start-demo.sh --skip-docker-build --skip-terraform

First Run (Complete Setup):
- Prompts for:
- Project name (default: "Base Demo") - customizable name for your Harness project
- Docker Hub username (unless already logged in via Docker Desktop)
- Docker Hub password/PAT
- Harness Account ID
- Harness Personal Access Token (PAT)
- Validates project name doesn't use reserved words or conflict with existing projects
- Saves all credentials to .demo-config for future runs
- Creates Harness resources via Terraform
- Takes ~6-10 minutes total (parallel Docker builds + IaC provisioning)
Subsequent Runs:
- Detects existing state file and skips Harness resource creation
- Reuses saved credentials from .demo-config
- Only prompts if saved credentials are missing or invalid
- Takes ~2-3 minutes for infrastructure verification
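The skip logic on subsequent runs amounts to a state-file check before any Terraform work. A minimal sketch, assuming the state lives at kit/terraform.tfstate (the provision_harness wrapper is our illustrative name, not the script's actual function):

```shell
# Hypothetical sketch of the idempotency check: provision only when no
# Terraform state from a previous run exists.
provision_harness() {   # usage: provision_harness [path-to-state-file]
  local state="${1:-kit/terraform.tfstate}"
  if [ -f "$state" ]; then
    echo "skip: existing Terraform state at $state"
    return 0
  fi
  terraform -chdir=kit init &&
  terraform -chdir=kit plan -var="pat=$DEMO_BASE_PAT" -var-file="se-parms.tfvars" -out=plan.tfplan &&
  terraform -chdir=kit apply -auto-approve plan.tfplan
}
```

Running it a second time prints the skip message instead of re-applying, which is why subsequent start-demo.sh runs finish in a few minutes.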
The script stores credentials in .demo-config (git-ignored) for convenience:
- Project name & identifier - Your custom Harness project name
- Docker Hub username - Reused for subsequent runs
- Harness Account ID - Saved to avoid re-entering
- Harness PAT - Cached for convenience (can also use the DEMO_BASE_PAT env var)
- Docker Hub password/PAT - Saved for Terraform configuration
Security Notes:
- .demo-config is automatically excluded from Git via .gitignore
- Use Personal Access Tokens (PATs) instead of passwords when possible
- Docker Hub PAT: https://hub.docker.com/settings/security
- Harness PAT: Profile > My API Keys & Tokens
Using Docker Desktop:
- If you log in to Docker Hub through Docker Desktop, the script detects this and reuses your session
- You won't be prompted for Docker credentials during the build phase
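One plausible way to detect an existing session, before ever prompting, is to look at Docker's client config: Docker Desktop records a credential helper ("credsStore"), and a plain docker login records an entry for index.docker.io. This is an assumption about the mechanism, not the script's exact code:

```shell
# Hypothetical sketch: treat a configured credential store or a Docker Hub
# entry in Docker's client config as evidence of an existing login.
docker_logged_in() {
  local cfg="${DOCKER_CONFIG:-$HOME/.docker}/config.json"
  [ -f "$cfg" ] && grep -Eq '"credsStore"|index\.docker\.io' "$cfg"
}

if docker_logged_in; then
  echo "existing Docker Hub session detected; no prompt needed"
else
  echo "no saved session; would prompt for username and PAT"
fi
```

`docker login` itself is still the authoritative check; this config-file peek just avoids an unnecessary prompt.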
# Interactive cleanup menu (RECOMMENDED)
./stop-demo.sh

Interactive Cleanup Menu:
When you run ./stop-demo.sh without arguments, you'll see a user-friendly menu with the following options:
1. Stop K8s deployments only (Recommended - preserves Harness resources)
   - Removes frontend and backend deployments/services
   - Keeps Harness project, Prometheus, and cluster running
   - Easy to restart the demo later with: ./start-demo.sh --skip-terraform
2. Stop K8s deployments + Delete Prometheus
   - Same as option 1, but also removes Prometheus monitoring
3. Stop K8s deployments + Stop cluster
   - Stops local deployments and shuts down Colima/minikube
   - Preserves Harness resources for next time
4. Full cleanup (delete all Harness resources)
   - Deletes your Harness demo project
   - Deletes the Docker Hub repository
   - Removes Prometheus
   - Keeps the cluster running and config files
5. Complete cleanup (everything including cluster)
   - Same as option 4, but also stops the cluster
   - Option to delete the Colima VM for a fresh start (Apple Silicon)
6. Custom cleanup options
   - Choose exactly what to clean up
7. Exit without doing anything
Command-Line Flags (Skip Interactive Menu):
For automated/scripted use:
- ./stop-demo.sh --delete-prometheus - Also remove Prometheus monitoring
- ./stop-demo.sh --stop-cluster - Also stop Kubernetes cluster (Colima or minikube)
- ./stop-demo.sh --delete-harness-project - Delete your Harness demo project via API
- ./stop-demo.sh --delete-docker-repo - Delete Docker Hub harness-demo repository via API
- ./stop-demo.sh --delete-config-files - Delete .demo-config, se-parms.tfvars, and IaC state files
- ./stop-demo.sh --full-cleanup - Complete cleanup (all of the above except config files)
- ./stop-demo.sh --no-interactive - Skip menu, use minimal cleanup
Recommended Workflow:
After running the demo, use the default interactive menu (option 1) to preserve Harness resources:
./stop-demo.sh # Choose option 1 (default)

To restart the demo later without recreating Harness resources:
./start-demo.sh --skip-terraform

Next Steps: After running start-demo.sh successfully:
- Navigate to app.harness.io and select your demo project
- Configure Harness Code Repository (see Step 8 in Manual Setup below)
- Follow the lab guides in the markdown/ directory
If you prefer manual control or need to troubleshoot, follow these detailed steps:
Option A: Rancher Desktop (Recommended)
- Download and install Rancher Desktop
- Open Rancher Desktop preferences
- Enable Kubernetes
- Wait for Kubernetes to start (green indicator)
- Services will be automatically accessible at
localhost
Option B: minikube
# Start minikube
minikube start
# Enable metrics-server addon
minikube addons enable metrics-server
# In a separate terminal, run minikube tunnel (required for service access)
# Keep this running during the demo
minikube tunnel

# Navigate to the kit directory
cd kit
# Create monitoring namespace
kubectl create namespace monitoring
# Deploy Prometheus
kubectl -n monitoring apply -f ./prometheus.yml
# Verify Prometheus is running
kubectl get pods -n monitoring

Optional - Expose Prometheus with ngrok (if the Harness delegate can't reach the cluster-local URL):
# Port forward Prometheus
kubectl port-forward -n monitoring svc/prometheus-k8s 9090:9090
# In another terminal, expose via ngrok
ngrok http 9090
# Copy the HTTPS URL (e.g., https://abc123.ngrok.io)
# You'll use this URL in the Terraform configuration later

✅ Automated: The start-demo.sh script automatically detects your architecture (Intel/AMD vs Apple Silicon) and builds all images with the correct platform settings. You can skip this step if using the automated script.

⚠️ Manual Builds: If building manually, note that Harness Cloud runs on amd64 architecture. Apple Silicon users (M1/M2/M3/M4) must use docker buildx build --platform linux/amd64.
# Navigate to backend directory
cd backend
# Build the Docker image
# Replace "dockerhubaccountid" with YOUR Docker Hub username
# For Intel/AMD Macs and PCs:
docker build -t dockerhubaccountid/harness-demo:backend-latest .
# For Apple Silicon Macs (M1/M2/M3/M4):
docker buildx build --platform linux/amd64 -t dockerhubaccountid/harness-demo:backend-latest --push .
# If not using buildx --push flag, login and push separately:
docker login -u dockerhubaccountid
docker push dockerhubaccountid/harness-demo:backend-latest

# Navigate to python-tests directory
cd python-tests
# For Intel/AMD Macs and PCs:
docker build -t dockerhubaccountid/harness-demo:test-latest .
docker push dockerhubaccountid/harness-demo:test-latest
# For Apple Silicon Macs (M1/M2/M3/M4):
docker buildx build --platform linux/amd64 -t dockerhubaccountid/harness-demo:test-latest --push .

Important Docker Image Tags:
- backend-latest - Django backend application (production runtime)
- test-latest - Python + pytest environment (CI testing only)
- demo-base-<tag> - Frontend Angular application
Critical: Remember to replace dockerhubaccountid in:
- The Docker build/push commands above
- kit/main.tf line ~300: imagePath: dockerhubaccountid/harness-demo
- Your Harness pipeline's Test Intelligence step, to use the test-latest image
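To avoid missing an occurrence of the placeholder, the substitution can be scripted. A small sketch: the replace_placeholder helper name is ours, and -i.bak keeps a backup so the same invocation works with both GNU and BSD sed:

```shell
# Hypothetical helper: replace every dockerhubaccountid placeholder in a file
# with your real Docker Hub username, keeping a .bak backup of the original.
replace_placeholder() {   # usage: replace_placeholder <file> <your-username>
  sed -i.bak "s/dockerhubaccountid/$2/g" "$1"
}

# Example: replace_placeholder kit/main.tf your-dockerhub-username
```

After running it, diff the file against the .bak copy to confirm only the intended lines changed.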
1. Log in to Harness: app.harness.io
2. Enable Required Modules:
   - Navigate to Account Settings > Subscriptions
   - Enable: CI, CD, and Code Repository
3. Install Harness Delegate:
   - Go to Account Settings > Delegates
   - Click "New Delegate"
   - Select "Kubernetes" and follow the Helm installation instructions
   - Example:
     helm repo add harness-delegate https://app.harness.io/storage/harness-download/delegate-helm-chart/
     helm upgrade -i helm-delegate harness-delegate/harness-delegate-ng \
       --namespace harness-delegate-ng --create-namespace \
       --set delegateName=helm-delegate \
       --set accountId=YOUR_ACCOUNT_ID \
       --set delegateToken=YOUR_DELEGATE_TOKEN
4. Get Your Harness Account ID:
   - Click on your profile (top right)
   - Your account ID is in the URL (e.g., VEuU4vZ6QmSJZcgvnccqYQ)
5. Create a Harness API Token:
   - Go to your profile > My API Keys & Tokens
   - Create a new token with appropriate permissions
   - Save this token securely
Note: The automated start-demo.sh script handles this step automatically. Only follow these manual steps if you skipped the automated setup or used --skip-terraform.
# Navigate to kit directory
cd ../kit
# Edit se-parms.tfvars
# Replace the placeholder values with your actual values

se-parms.tfvars:
account_id = "your-harness-account-id"
docker_username = "your-dockerhub-username"
DOCKER_PAT = "your-dockerhub-pat"
project_name = "Base Demo"
project_identifier = "Base_Demo"

Important: Also update dockerhubaccountid in kit/main.tf (line ~300) with your Docker Hub username.
Note: The automated start-demo.sh script handles this step automatically. Only follow these manual steps if you skipped the automated setup or used --skip-terraform.
# Set your Harness API token as an environment variable (Mac/Linux)
export DEMO_BASE_PAT="pat.your-actual-token-here"
# Verify it's set
echo $DEMO_BASE_PAT
# Initialize Terraform
terraform init
# Preview the changes
terraform plan -var="pat=$DEMO_BASE_PAT" -var-file="se-parms.tfvars" -out=plan.tfplan
# Apply the configuration
terraform apply -auto-approve plan.tfplan

What Gets Created (all in your demo project):
- Harness project (your custom name, default: "Base Demo")
- Kubernetes connector (workshop_k8s)
- Docker Hub connector (workshopdocker)
- Prometheus connector
- Docker credentials (secrets)
- "Compile Application" template
- Dev and Prod environments
- K8s Dev infrastructure
- Backend service
- Monitored services for continuous verification
- Code repository (partner_demo_kit) mirrored from GitHub
- Navigate to Harness UI > Code Repository module
- Select your demo project
- Click on "partner_demo_kit" repository
- Click "Clone" (top right) > "+Generate Clone Credential"
- Save the generated username and token
- Enable Secret Scanning:
- Go to Manage Repository > Security
- Turn on "Secret Scanning"
- Save
Follow the step-by-step lab guides in the markdown/ directory which walk through:
- Secret Scanning Demo: Try to push a secret and see it blocked
- Build Pipeline: Create CI pipeline with test intelligence
- Frontend Deployment: Deploy frontend with rolling strategy
- Backend Deployment: Deploy backend with canary strategy
- Continuous Verification: Verify deployments using Prometheus metrics
Access the Demo Application:
- Rancher Desktop: http://localhost:8080 (automatic)
- minikube: http://localhost:8080 (requires minikube tunnel running)
.
├── README.md # Complete setup and demo guide
├── CLAUDE.md # Instructions for Claude Code AI assistant
├── start-demo.sh # Automated startup script for local infrastructure
├── stop-demo.sh # Automated shutdown script for cleanup
├── kit/ # Terraform Infrastructure as Code
│ ├── main.tf # Main IaC configuration
│ ├── se-parms.tfvars # Your configuration variables
│ └── prometheus.yml # Prometheus deployment
├── backend/ # Django backend application
│ ├── Dockerfile
│ └── requirements.txt
├── frontend-app/ # Angular frontend application
│ └── harness-webapp/
│ ├── Dockerfile
│ └── package.json
├── harness-deploy/ # Kubernetes manifests
│ ├── backend/ # Backend K8s resources
│ └── frontend/ # Frontend K8s resources
├── python-tests/ # Test suites for CI demo
└── markdown/ # Step-by-step lab guides (0-7)
├── 0-login.md # Getting started and verification
├── 1-coderepo.md # Secret scanning demo
├── 2-build.md # CI pipeline setup
├── 3-cd-frontend.md # Frontend deployment
├── 4-cd-backend.md # Backend canary deployment
├── 5-security.md # Security testing (licensed only)
├── 6-cv.md # Continuous verification
└── 7-opa.md # OPA policy enforcement (licensed only)
Issue: Terraform fails with authentication error
- Solution: Verify the DEMO_BASE_PAT environment variable is set correctly: echo $DEMO_BASE_PAT
Issue: Terraform not found
- Solution: Install Terraform from https://www.terraform.io/downloads
Issue: Services not accessible at localhost:8080
- Solution (Colima): Services should be automatically accessible. If not, check that Colima is running with colima status
- Solution (minikube): Ensure minikube tunnel is running in a separate terminal
- Solution (Rancher Desktop): Check that Kubernetes is enabled in preferences
Issue: Colima fails to start (Apple Silicon)
- Cause: Missing dependencies (qemu, lima-additional-guestagents)
- Solution: Install all required dependencies and start fresh:
brew install colima docker kubectl qemu lima-additional-guestagents
colima stop
colima delete
colima start --vm-type=vz --vz-rosetta --arch x86_64 --cpu 4 --memory 8 --kubernetes
Issue: Prometheus connector fails in Harness
- Solution: Use ngrok to expose Prometheus and update the connector URL to the ngrok HTTPS URL
Issue: Docker image push fails
- Solution: Verify you're logged in to Docker Hub:
docker login -u your-username
Issue: Image pull error: "pull access denied for harness-demo"
- Cause: The placeholder dockerhubaccountid was not replaced with your actual Docker Hub username
- Solution:
  - Update kit/main.tf line ~300 to use your Docker Hub username: imagePath: YOUR-USERNAME/harness-demo
  - In the Harness UI, verify the service artifact configuration shows YOUR-USERNAME/harness-demo:backend-latest
  - Re-run the deployment pipeline
Issue: Test Intelligence step fails with "pytest: not found"
- Cause: The Test Intelligence step is not using the correct container image
- Solution:
  - Build and push the test image (see architecture notes above for Apple Silicon):
    cd python-tests
    # Apple Silicon:
    docker buildx build --platform linux/amd64 -t YOUR-USERNAME/harness-demo:test-latest --push .
    # Intel/AMD:
    docker build -t YOUR-USERNAME/harness-demo:test-latest . && docker push YOUR-USERNAME/harness-demo:test-latest
  - In the Harness pipeline, update the Test Intelligence step to use the image YOUR-USERNAME/harness-demo:test-latest
  - Do NOT use backend-latest for testing - use test-latest
Issue: Test Intelligence fails with "exec /usr/bin/sh: exec format error"
- Cause: Docker image was built for wrong architecture (ARM64 instead of amd64)
- Affects: Apple Silicon Macs building images for Harness Cloud
- Solution: Rebuild the image with --platform linux/amd64:
  cd python-tests
  docker buildx build --platform linux/amd64 -t YOUR-USERNAME/harness-demo:test-latest --push .
Issue: Pipeline setup or build infrastructure questions
- Solution: The demo uses Harness Cloud for CI builds (test and compile steps)
- Requires: Harness account with credit card verification (free tier available)
- In pipeline infrastructure, select:
- Platform: "Harness Cloud"
- OS: "Linux"
- Architecture: "Amd64"
Issue: Harness delegate not connecting
- Solution: Check delegate pod status:
kubectl get pods -n harness-delegate-ng
# Check Kubernetes is running
kubectl cluster-info
# Check Prometheus is deployed
kubectl get pods -n monitoring
# Check deployments (after running demo)
kubectl get pods -A | grep deployment
kubectl get services -A | grep svc
# Check Harness delegate
kubectl get pods -n harness-delegate-ng

To start fresh and reset everything, you have several options:
# Clean up EVERYTHING (Harness project, Docker repo, local files, K8s resources)
./stop-demo.sh --full-cleanup

This single command will:
- Delete your Harness demo project via API (with confirmation prompt)
- Delete the Docker Hub harness-demo repository via API (with confirmation prompt)
- Delete configuration files (.demo-config, se-parms.tfvars, state files)
- Remove Kubernetes deployments (frontend/backend)
- Remove Prometheus monitoring
- Stop Kubernetes cluster (minikube only)
Choose specific cleanup operations:
# Clean up only cloud resources (Harness + Docker Hub)
./stop-demo.sh --delete-harness-project --delete-docker-repo
# Clean up cloud resources and local config (keeps K8s running)
./stop-demo.sh --delete-harness-project --delete-docker-repo --delete-config-files
# Clean up only local resources (keeps Harness project)
./stop-demo.sh --delete-prometheus --stop-cluster --delete-config-files

Step 1: Clean Kubernetes Resources
# Delete deployed applications
kubectl delete deployment frontend-deployment --ignore-not-found=true
kubectl delete service web-frontend-svc --ignore-not-found=true
kubectl delete deployment backend-deployment --ignore-not-found=true
kubectl delete service web-backend-svc --ignore-not-found=true
# Delete Prometheus (optional)
kubectl delete -f kit/prometheus.yml -n monitoring --ignore-not-found=true
kubectl delete namespace monitoring --ignore-not-found=true

Step 2: Delete Harness Resources
Important: Delete Harness resources through the UI before running
terraform destroy. This ensures proper cleanup of all dependencies.
- Navigate to Harness UI > Code Repository > Manage Repository
- Delete "partner_demo_kit" repository
- Navigate to Projects
- Delete your demo project (this removes all project resources)
Step 3: Clean IaC State
After deleting Harness resources through the UI:
cd kit
# Option A: Destroy using Terraform (may have some errors - safe to ignore)
terraform destroy -var="pat=$DEMO_BASE_PAT" -var-file="se-parms.tfvars"
# Option B: Clean slate - remove all state files
git clean -dxf # WARNING: Removes all untracked files including .tfstate files

Note: The destroy command may show errors for resources already deleted through the Harness UI. This is expected and safe to ignore. The cleanup script handles this automatically.
Step 4: Clean Docker Hub (Optional)
- Navigate to Docker Hub
- Delete the "harness-demo" repository
Step 5: Stop Kubernetes (Optional)
# For minikube
minikube stop
# For Rancher Desktop - stop through the UI

Once you've successfully completed the demo labs and built your Harness Home Lab, you're ready to:
- Create Your Custom Demo Recording
This is your final assessment - a customer pitch recording that demonstrates your mastery of Harness sales enablement. You'll create a 10-15 minute recording where you pitch Harness to a pretend client.
Recording Requirements:
- Choose a specific use case or industry vertical
- Customize the demo to address relevant pain points for that vertical
- Deliver a compelling sales pitch (not just a demo walkthrough)
- Submit to your Harness Partner Manager for evaluation
Evaluation Rubric:
Your pitch will be evaluated on five key criteria:
1. Understanding of the Module
- Accurate description of Harness features and benefits
- Clear explanation of how Harness addresses customer pain points
- Technical accuracy and depth of knowledge
2. Articulation of Value Proposition
- Effective communication of Harness's unique differentiators
- Ability to align Harness capabilities with specific customer needs
- Clear ROI and business value messaging
3. Opportunity Identification
- Demonstration of how to spot potential use cases for Harness
- Understanding of which customer scenarios are best suited for each module
- Ability to qualify opportunities and map features to pain points
4. Presentation Skills
- Clear and confident delivery
- Logical flow of information
- Professional communication style
- Engaging storytelling and customer-focused narrative
5. Customization and Internalization
- Evidence that you've made the pitch your own, not just repeating scripted content
- Integration of training materials with your own insights and examples
- Authenticity and personal style in delivery
- Creative adaptation of demo scenarios to customer context
- Apply Your Skills
- Use this environment for customer proof-of-concepts
- Adapt the demo for specific customer scenarios
- Build additional demo scenarios using the same infrastructure
- Stay Current
- Join Harness partner community events and webinars
- Access updated demo materials and best practices
- Share your customizations and learnings with other partners
- Get Certified
- Complete Harness certification programs
- Earn Harness professional badges
- Advance your partner enablement journey
- Lab Guides: See markdown/ directory for step-by-step instructions
- Harness Documentation: docs.harness.io
- Automation Scripts: start-demo.sh and stop-demo.sh
For questions or assistance:
- Contact your Harness Partner Manager
- Submit issues via GitHub
- Join the Harness Partner Community
- Frontend: Angular 17 application with Harness Feature Flags integration
- Backend: Django 5.0 REST API
- Local Kubernetes: Rancher Desktop (recommended) or minikube
- Monitoring: Prometheus for continuous verification metrics
- CI/CD: Harness Cloud for builds, local K8s for deployments
- Image Storage: Docker Hub
We welcome contributions and suggestions to improve this demo kit. Please submit pull requests or open issues for any enhancements.
Note: This demo kit is maintained by Harness.io for partner use. While it's designed to be self-contained, partners are encouraged to customize and extend it based on specific customer needs.

