Commit e7283b3: [CDS-113975] Added Helm Faq (#11647)

Added Helm FAQ: Is switching between Kubernetes Deployment and Helm Deployment supported?

1 file changed: docs/continuous-delivery/deploy-srv-diff-platforms/helm/helm-deployment-faqs.md (+73, -0)
Users facing namespace errors should remove namespace objects from their manifests.
### Why does Helm uninstall run after a failed initial deployment in the Helm Deployment step?

Helm recommends purging the initial release if any failure happens: [https://github.com/helm/helm/issues/3353#issuecomment-358367529](https://github.com/helm/helm/issues/3353#issuecomment-358367529). Hence, Harness purges the release if the first-ever release fails in the Helm Deployment step. If this were not done, the user would have to clean up the failed release manually, and all subsequent releases would fail until they did. Harness purges the failed initial release to spare users this manual effort.

### Is switching between Kubernetes Deployment and Helm Deployment supported?

**Yes, switching between the Kubernetes Deployment and Helm Deployment types is supported**, but it requires careful planning due to some important limitations and considerations.

:::warning Important Considerations
Switching between deployment types on the same infrastructure and namespace can lead to deployment failures due to immutable Kubernetes selector labels. Plan your migration carefully and consider using different namespaces or performing force deployments.
:::

#### Limitations and Considerations

1. **Label Selector Handling**:
   - When switching from Kubernetes to Helm deployments, leftover selector labels (particularly `harness.io/track: stable`) can cause deployment failures, especially when canary steps are skipped.
   - This happens because selector labels in Kubernetes are immutable, and Helm's 3-way merge won't remove them from existing deployments.

2. **Deployment Type Compatibility**:
   - Canary flows typically work because Harness adds track labels to the pod labels of canary deployments.
   - Rolling deployments (which skip canary steps) may fail due to label selector mismatches between the existing deployment and the new manifests.

3. **Maintenance Mode Deployments**:
   - When services are in maintenance mode (`replicas: 0`), skipping canary steps is often desired to avoid spinning up unnecessary pods.
   - However, this scenario is the most prone to selector mismatch issues when switching deployment types.
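
To make the mismatch concrete, here is an illustrative sketch of a Deployment left behind by a Harness Kubernetes canary flow (the `my-service` names are placeholders, not a real manifest): its immutable selector still carries the track label, so a later Helm upgrade whose pod template labels omit that label is rejected with `selector does not match template labels`.

```yaml
# Illustrative leftover state; names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  selector:
    matchLabels:
      app: my-service
      harness.io/track: stable   # immutable once set; Helm's 3-way merge won't remove it
  template:
    metadata:
      labels:
        app: my-service          # a new manifest that omits the track label here fails to patch
```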

#### Best Practices for Switching Deployment Types

1. **Perform a Force Deployment**:
   - When switching deployment types, perform a forced fresh deployment (delete and redeploy) to remove stale selectors.
   - This is especially important for services that are in maintenance mode or not serving traffic.

2. **Enable the "Skip Harness Label Selector" Setting**:
   - For Kubernetes deployments, enable the ["Skip Harness label selector tracking" setting](/docs/continuous-delivery/deploy-srv-diff-platforms/kubernetes/cd-kubernetes-category/skip-harness-label-selector-tracking-on-kubernetes-deployments).
   - This helps when you need to switch between canary and rolling deployment flows.

3. **Use Different Release Names and Namespaces**:
   - When deploying the same service using both the Kubernetes and Helm deployment types, use different release names, and consider using separate namespaces to avoid conflicts.
   - Avoid deploying the same service to the same infrastructure using both deployment types simultaneously.

4. **Check the Delegate Version**:
   - Ensure your delegate is updated to the latest version to benefit from fixes related to label handling.
   - If you can't update the delegate, update the `harness-helm-post-renderer` binary by adding the following commands to the delegate's `INIT_SCRIPT`:

   ```bash
   # Create the target directory, then download the updated post-renderer binary
   mkdir -p client-tools/harness-helm-post-renderer/v0.1.5
   curl -f -s -L -o client-tools/harness-helm-post-renderer/v0.1.5/harness-helm-post-renderer \
     https://app.harness.io/public/shared/tools/harness-helm-post-renderer/release/v0.1.5/bin/linux/amd64/harness-helm-post-renderer
   # Make the binary executable and prepend its directory to the PATH
   chmod +x client-tools/harness-helm-post-renderer/v0.1.5/harness-helm-post-renderer
   export PATH=client-tools/harness-helm-post-renderer/v0.1.5/:$PATH
   ```

5. **Feature Flag Requirements**:
   - Ensure the feature flag `CDS_HELM_STEADY_STATE_CHECK_1_16_V2_NG` is enabled for improved Helm steady state checks.
   - Contact [Harness Support](mailto:[email protected]) if you need this feature flag enabled.
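
As a quick sanity check after updating the binary, you can confirm that the post-renderer is discoverable from the same shell that ran the `INIT_SCRIPT`. This is a minimal sketch; the path and version simply mirror the snippet above and should match your download.

```shell
# Prepend the post-renderer directory to the PATH, as the INIT_SCRIPT does
export PATH="client-tools/harness-helm-post-renderer/v0.1.5/:$PATH"

# Report whether the binary is now discoverable
if command -v harness-helm-post-renderer >/dev/null 2>&1; then
  echo "harness-helm-post-renderer found on PATH"
else
  echo "harness-helm-post-renderer missing: check the download step"
fi
```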

#### Troubleshooting Common Errors

If you encounter an error like the following:

```
Error: UPGRADE FAILED: cannot patch "[resource]" with kind Deployment: Deployment.apps "[resource]" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"[resource]", ..., "harness.io/track":"stable", ...}: selector does not match template labels
```

it indicates a label selector mismatch. Follow these steps:

1. Check whether `harness.io/track: stable` exists in your deployment's selector labels:

   ```bash
   kubectl get deployment [deployment-name] -o yaml
   ```

2. Perform a force deployment by deleting the existing deployment first.
3. Deploy again with your desired configuration.
4. If the issue persists, verify that your Helm chart doesn't contain hardcoded `harness.io/track` labels in the selector.
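
The check in step 1 can be scripted. A minimal sketch, where a here-doc stands in for the output of `kubectl get deployment [deployment-name] -o yaml` and the labels are illustrative:

```shell
# Save the live Deployment spec, then look for the stale track label in it.
# The here-doc below stands in for real `kubectl get deployment -o yaml` output.
cat > deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: my-service
      harness.io/track: stable
EOF

if grep -q 'harness.io/track' deploy.yaml; then
  echo "Stale track label found: perform a force deployment (delete, then redeploy)."
else
  echo "No track label in the manifest."
fi
```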

#### Migration Strategy

For a smooth transition between deployment types:

1. **Plan the migration** during maintenance windows when possible.
2. **Test the switch** in non-production environments first.
3. **Document your current configuration** before making changes.
4. **Consider using different services** for different deployment types if you need both approaches.
