diff --git a/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp new file mode 100644 index 0000000000..0ea6d7a7f8 Binary files /dev/null and b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp differ diff --git a/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp index 3a8fad48ae..28990d9169 100644 Binary files a/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp and b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp differ diff --git a/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp index a4fb7d04ff..4a2791b132 100644 Binary files a/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp and b/public/docs/i/1000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp differ diff --git a/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp new file mode 100644 index 0000000000..0ea6d7a7f8 Binary files /dev/null and b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp differ diff --git a/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp index 3a8fad48ae..28990d9169 100644 Binary files a/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp and b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp differ diff --git a/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp index a4fb7d04ff..4a2791b132 100644 Binary files a/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp and b/public/docs/i/2000/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp differ diff --git a/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp new file mode 100644 index 0000000000..0ea6d7a7f8 Binary files /dev/null and 
b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.webp differ diff --git a/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp index 26e63564e4..9cde627b30 100644 Binary files a/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp and b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.webp differ diff --git a/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp index e7b7ea533f..8c4d511118 100644 Binary files a/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp and b/public/docs/i/600/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.webp differ diff --git a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png new file mode 100644 index 0000000000..0c87a89648 Binary files /dev/null and b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png differ diff --git a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png deleted file mode 100644 index a8fc2c5c8c..0000000000 Binary files a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png and /dev/null differ diff --git a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png index fc96454fdf..0506695d2a 100644 Binary files a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png and b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png differ diff --git a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png index df6a042aca..80973c84ef 100644 Binary files a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png and b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png differ diff --git a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-success.png b/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-success.png deleted file mode 100644 index c97c86eca7..0000000000 Binary files 
a/public/docs/i/x/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-success.png and /dev/null differ diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png new file mode 100644 index 0000000000..b921963ed2 Binary files /dev/null and b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png differ diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png.json b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png.json new file mode 100644 index 0000000000..bd671183b1 --- /dev/null +++ b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png.json @@ -0,0 +1 @@ +{"width":600,"height":601,"updated":"2026-01-15T05:15:01.509Z"} \ No newline at end of file diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png index 0aa76a87a1..da9b4c9475 100644 Binary files a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png and b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png differ diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png.json b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png.json index 741f8c74c6..e9cc29ed33 100644 --- a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png.json +++ b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-config.png.json @@ -1,5 +1 @@ -{ - "width": 970, - "height": 978, - "updated": "2025-08-01T08:52:49.163Z" -} \ No newline at end of file +{"width":601,"height":623,"updated":"2026-01-15T05:15:01.529Z"} \ No newline at end of file diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png index a472cf1499..34b99bf9f1 100644 Binary files a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png and b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png differ diff --git a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png.json b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png.json index f28e783e52..090909d682 100644 --- a/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png.json +++ b/public/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png.json @@ -1,5 +1 @@ 
-{ - "width": 970, - "height": 1184, - "updated": "2025-08-01T08:52:49.163Z" -} \ No newline at end of file +{"width":602,"height":1007,"updated":"2026-01-15T05:15:01.552Z"} \ No newline at end of file diff --git a/src/pages/docs/installation/load-balancers/index.mdx b/src/pages/docs/installation/load-balancers/index.mdx index f4853c15bc..4887a14347 100644 --- a/src/pages/docs/installation/load-balancers/index.mdx +++ b/src/pages/docs/installation/load-balancers/index.mdx @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2023-01-01 -modDate: 2025-12-08 +modDate: 2026-01-15 title: Load Balancers navTitle: Overview navSection: Load Balancers @@ -17,7 +17,7 @@ Octopus Deploy can work with any http/https load balancer technology. There are Octopus Server provides a health check endpoint for your load balancer to ping: `/api/octopusservernodes/ping`. :::figure -![](/docs/img/shared-content/administration/images/load-balance-ping.png) +![Load balancer ping UI](/docs/img/shared-content/administration/images/load-balance-ping.png) ::: Making a standard `HTTP GET` request to this URL on your Octopus Server nodes will return: @@ -44,7 +44,7 @@ Polling tentacles deserve special attention due to how they work with Octopus De We recommend having a dedicated URL for each node in the primary region and routing all traffic through a load balancer or a traffic manager. When you have to fail over to the secondary region, update the dedicated URLs to point to a corresponding node in the secondary region. :::div{.warning} -Important! You must configure the traffic to be “pass through” with no SSL off-loading. The tentacles and Octopus Deploy establish a two-way trust via certificates. If a third unknown certificate is introduced, the tentacle and Octopus deploy will reject the connection. +Important! You must configure the traffic to be "pass through" with no SSL off-loading. The tentacles and Octopus Deploy establish a two-way trust via certificates. If a third unknown certificate is introduced, the tentacle and Octopus deploy will reject the connection. ::: ## gRPC Services @@ -52,7 +52,8 @@ Important! You must configure the traffic to be “pass through” with no SSL o Several Octopus features (eg. Kubernetes Live Object Status and Argo CD integration) rely on communications via gRPC that require specific configuration to account for certificate trust. Octopus generates a self signed certificate for gRPC communications. When the a gRPC client needs to connect to Octopus via a load balancer, there are two common methods to achieve this: -1. Using TLS/SSL bridging, with verification disabled between the load balancer and Octopus + +1. Using TLS/SSL bridging, with verification disabled between the load balancer and Octopus. Additional configuration will be required for the gRPC client to trust the load balancer certificate. 2. 
Using TLS/SSL passthrough ## Third Party Load Balancers diff --git a/src/pages/docs/kubernetes/live-object-status/troubleshooting/index.md b/src/pages/docs/kubernetes/live-object-status/troubleshooting/index.md index 57efe161b0..6cd6857c22 100644 --- a/src/pages/docs/kubernetes/live-object-status/troubleshooting/index.md +++ b/src/pages/docs/kubernetes/live-object-status/troubleshooting/index.md @@ -27,15 +27,21 @@ Support for running the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes ### gRPC connections via a load balancer -Octopus generates a self signed certificate for gRPC communications like those between Octopus and Kubernetes monitor and requires specific configuration. +Octopus generates a self-signed certificate for gRPC communications, like those between Octopus and the Kubernetes monitor, and this requires specific configuration. Refer to the [load balancer documentation](/docs/installation/load-balancers#grpc-services) for further information. +### Certificate errors when trying to create gRPC connections + +The self-signed certificate is only useful for simple scenarios where the Kubernetes monitor can talk directly to Octopus Server (or is proxied with TLS passthrough). + +Refer to the [agent installation docs](/docs/kubernetes/targets/kubernetes-agent#grpc-certificates) for more options when using custom certificates. + ## Runtime ### Failed to establish connection with Kubernetes Monitor \{#failed-to-establish–connection-with-kubernetes-monitor} -Some actions, such as logs and events, require per request communication with the Kubernetes monitor running in your cluster. +Some actions, such as logs and events, require per-request communication with the Kubernetes monitor running in your cluster. If the Kubernetes monitor cannot be accessed, follow these steps to determine why: @@ -45,9 +51,9 @@ If the Kubernetes monitor cannot be accessed, follow these steps to determine wh In almost all cases, we have found restarting the Kubernetes monitor pod will re-establish the connection if there are no external factors at play. Please reach out to support if you are finding cases of repeated, unexpected failure. -### We couldn’t find a Kubernetes monitor associated with the deployment target \{#kubernetes-monitor-not-found} +### We couldn't find a Kubernetes monitor associated with the deployment target \{#kubernetes-monitor-not-found} -Similar to the [error above](#failed-to-establish–connection-with-kubernetes-monitor), however more severe. +Similar to the [error above](/docs/kubernetes/live-object-status/troubleshooting#failed-to-establish–connection-with-kubernetes-monitor), but more severe. This error will be shown when Octopus fails to find the registration of a Kubernetes monitor at all. If the Kubernetes agent and monitor are both still running in your Kubernetes cluster, this means the Kubernetes monitor will need to be re-registered with Octopus. @@ -71,10 +77,12 @@ The rate limit is not a hard stop to messages being sent between Octopus Server Objects are reported out of sync when the manifest the Kubernetes cluster sends back to us does not match the one that Octopus applied in your deployment.
This can happen for a number of reasons, including: + - Someone has made an update to the object outside of Octopus deployments - A controller is automatically making changes to the object on your cluster - There are additional fields that Kubernetes does not recognize in the applied manifest that Kubernetes automatically removes from the reported live manifest If possible, we recommend ensuring that: + - Octopus is the only entity to modify your deployments - You craft your Kubernetes manifests to ensure that there are no invalid fields diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md index 2b42d73a20..3e7eaa7bf3 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/automated-installation.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-05-14 -modDate: 2024-08-30 +modDate: 2026-01-15 title: Automated Installation description: How to automate the installation and management of the Kubernetes Agent navOrder: 40 @@ -33,17 +33,23 @@ Always specify the major version in the **version** property on the **helm_relea When upgrading to a new major version of the Agent, create a separate resource to ensure the Helm values match the updated schema. [Automatic upgrade support](/docs/kubernetes/targets/kubernetes-agent/upgrading#automatic-updates-coming-in-20234) is expected in version 2023.4. ::: +:::div{.warning} +We recommend completely deleting the Kubernetes namespace when removing a Kubernetes agent. + +If possible, prefer replacing `create_namespace = true` with an explicit Kubernetes namespace resource in your Terraform configuration.
+::: + ```ruby terraform { required_providers { octopusdeploy = { - source = "OctopusDeployLabs/octopusdeploy" - version = "0.30.0" + source = "OctopusDeploy/octopusdeploy" + version = "1.7.1" } helm = { - source = "hashicorp/helm" - version = "2.13.2" + source = "registry.terraform.io/hashicorp/helm" + version = "3.1.1" } } } @@ -51,105 +57,147 @@ terraform { locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" + octopus_grpc_address = "https://myinstance.octopus.app:8443" octopus_polling_address = "https://polling.myinstance.octopus.app" } +provider "octopusdeploy" { + address = local.octopus_address + api_key = local.octopus_api_key +} + provider "helm" { - kubernetes { + kubernetes = { # Configure authentication for me } } -provider "octopusdeploy" { - address = local.octopus_address - api_key = local.octopus_api_key +data "octopusdeploy_teams" "everyone" { + partial_name = "Everyone" + skip = 0 + take = 1 } -resource "octopusdeploy_space" "agent_space" { - name = "agent space" - space_managers_teams = ["teams-everyone"] +resource "octopusdeploy_space" "monitoring" { + name = "Kubernetes Examples" + description = "Terraform created examples" + space_managers_teams = [data.octopusdeploy_teams.everyone.teams[0].id] } -resource "octopusdeploy_environment" "dev_env" { - name = "Development" - space_id = octopusdeploy_space.agent_space.id +resource "octopusdeploy_environment" "example" { + name = "Example" + space_id = octopusdeploy_space.monitoring.id } +# Create the Kubernetes agent deployment target resource "octopusdeploy_polling_subscription_id" "agent_subscription_id" {} resource "octopusdeploy_tentacle_certificate" "agent_cert" {} +resource "octopusdeploy_kubernetes_agent_deployment_target" "example" { + name = "Example Kubernetes Agent" + space_id = octopusdeploy_space.monitoring.id + environments = [octopusdeploy_environment.example.id] + roles = ["k8s-agent", "monitoring-enabled"] + + thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint + uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri +} -resource "octopusdeploy_kubernetes_agent_deployment_target" "agent" { - name = "agent-one" - space_id = octopusdeploy_space.agent_space.id - environments = [octopusdeploy_environment.dev_env.id] - roles = ["role-1", "role-2", "role-3"] - thumbprint = octopusdeploy_tentacle_certificate.agent_cert.thumbprint - uri = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri +# Create the Kubernetes monitor +resource "random_uuid" "monitor_installation" {} +resource "octopusdeploy_kubernetes_monitor" "example" { + space_id = octopusdeploy_space.monitoring.id + installation_id = random_uuid.monitor_installation.result + machine_id = octopusdeploy_kubernetes_agent_deployment_target.example.id } -resource "helm_release" "octopus_agent" { - name = "octopus-agent-release" +# Install the Kubernetes agent and monitor via Helm +resource "helm_release" "kubernetes_agent" { + name = "example-kubernetes-agent" repository = "oci://registry-1.docker.io" chart = "octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true - namespace = "octopus-agent-target" - - set { - name = "agent.acceptEula" - value = "Y" - } - - set { - name = "agent.name" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.name - } - - set_sensitive { - name = "agent.serverApiKey" - value = local.octopus_api_key - } - - set { - name = "agent.serverUrl" - value = local.octopus_address - } - - 
set { - name = "agent.serverCommsAddress" - value = local.octopus_polling_address - } - - set { - name = "agent.serverSubscriptionId" - value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri - } - - set_sensitive { - name = "agent.certificate" - value = octopusdeploy_tentacle_certificate.agent_cert.base64 - } - - set { - name = "agent.space" - value = octopusdeploy_space.agent_space.name - } - - set { - name = "agent.deploymentTarget.enabled" - value = "true" - } - - set_list { - name = "agent.deploymentTarget.initial.environments" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.environments - } - - set_list { - name = "agent.deploymentTarget.initial.tags" - value = octopusdeploy_kubernetes_agent_deployment_target.agent.roles - } + namespace = "octopus-agent-example" + + set = [ + { + name = "agent.acceptEula" + value = "Y" + }, + { + name = "agent.name" + value = octopusdeploy_kubernetes_agent_deployment_target.example.name + }, + { + name = "agent.serverUrl" + value = local.octopus_address + }, + { + name = "agent.serverCommsAddress" + value = local.octopus_polling_address + }, + { + name = "agent.serverSubscriptionId" + value = octopusdeploy_polling_subscription_id.agent_subscription_id.polling_uri + }, + { + name = "agent.space" + value = octopusdeploy_space.monitoring.name + }, + { + name = "agent.deploymentTarget.enabled" + value = "true" + }, + + # Kubernetes monitor configuration (optional) + { + name = "kubernetesMonitor.enabled" + value = "true" + }, + { + name = "kubernetesMonitor.registration.register" + value = "false" + }, + { + name = "kubernetesMonitor.monitor.serverGrpcUrl" + value = local.octopus_grpc_address + }, + { + name = "kubernetesMonitor.monitor.installationId" + value = octopusdeploy_kubernetes_monitor.example.installation_id + }, + { + name = "kubernetesMonitor.monitor.serverThumbprint" + value = octopusdeploy_kubernetes_monitor.example.certificate_thumbprint + } + ] + + set_sensitive = [ + { + name = "agent.serverApiKey" + value = local.octopus_api_key + }, + { + name = "agent.certificate" + value = octopusdeploy_tentacle_certificate.agent_cert.base64 + }, + { + name = "kubernetesMonitor.monitor.authenticationToken" + value = octopusdeploy_kubernetes_monitor.example.authentication_token + } + ] + + set_list = [ + { + name = "agent.deploymentTarget.initial.environments" + value = octopusdeploy_kubernetes_agent_deployment_target.example.environments + }, + { + name = "agent.deploymentTarget.initial.tags" + value = octopusdeploy_kubernetes_agent_deployment_target.example.roles + } + ] } ``` @@ -163,76 +211,78 @@ If you don't intend to manage the Kubernetes Agent configuration through Terrafo terraform { required_providers { helm = { - source = "hashicorp/helm" - version = "2.13.2" + source = "registry.terraform.io/hashicorp/helm" + version = "3.1.1" } } } -provider "helm" { - kubernetes { - # Configure authentication for me - } -} - locals { octopus_api_key = "API-XXXXXXXXXXXXXXXX" octopus_address = "https://myinstance.octopus.app" + octopus_grpc_address = "https://myinstance.octopus.app:8443" octopus_polling_address = "https://polling.myinstance.octopus.app" } -resource "helm_release" "octopus_agent" { - name = "octopus-agent-release" +provider "helm" { + kubernetes = { + # Configure authentication for me + } +} + +# Install the Kubernetes agent and monitor via Helm +resource "helm_release" "kubernetes_agent" { + name = "example-kubernetes-agent" repository = "oci://registry-1.docker.io" chart = 
"octopusdeploy/kubernetes-agent" version = "2.*.*" atomic = true create_namespace = true - namespace = "octopus-agent-target" - - set { - name = "agent.acceptEula" - value = "Y" - } - - set { - name = "agent.targetName" - value = "octopus-agent" - } - - set_sensitive { - name = "agent.serverApiKey" - value = local.octopus_api_key - } - - set { - name = "agent.serverUrl" - value = local.octopus_address - } - - set { - name = "agent.serverCommsAddress" - value = local.octopus_polling_address - } - - set { - name = "agent.space" - value = "Default" - } - - set { - name = "agent.deploymentTarget.enabled" - value = "true" - } - - set_list { - name = "agent.deploymentTarget.initial.environments" - value = ["Development"] - } + namespace = "octopus-agent-example" + + set = [ + { + name = "agent.acceptEula" + value = "Y" + }, + { + name = "agent.name" + value = "octopus-agent" + }, + { + name = "agent.serverUrl" + value = local.octopus_address + }, + { + name = "agent.serverCommsAddress" + value = local.octopus_polling_address + }, + { + name = "agent.space" + value = "Default" + }, + { + name = "agent.deploymentTarget.enabled" + value = "true" + } + ] - set_list { - name = "agent.deploymentTarget.initial.tags" - value = ["Role-1"] - } + set_sensitive = [ + { + name = "agent.serverApiKey" + value = local.octopus_api_key + } + ] + + set_list = [ + { + name = "agent.deploymentTarget.initial.environments" + value = ["Development"] + }, + { + name = "agent.deploymentTarget.initial.tags" + value = ["k8s-agent"] + } + ] } ``` diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md index 2d4cf545b1..b416eedd42 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-05-14 -modDate: 2024-07-31 +modDate: 2026-01-15 title: HA Cluster Support description: How to install/update the agent when running Octopus in an HA Cluster navOrder: 50 @@ -33,12 +33,14 @@ To install the agent with Octopus Deploy 2024.2 you need to adjust the Helm comm 1. Use the wizard to produce the Helm command to install the agent. 1. You may need to provide a ServerCommsAddress: you can just provide any valid URL to progress the wizard. 2. Replace the `--set agent.serverCommsAddress="..."` property with -``` + +```bash --set agent.serverCommsAddresses="{https://:/,https://:/,https://:/}" ``` + where each `:` is a unique address for an individual node. -3. Execute the Helm command in a terminal connected to the target cluster. +1. Execute the Helm command in a terminal connected to the target cluster. :::div{.warning} The new property name is `agent.serverCommsAddresses`. Note that "Addresses" is plural. @@ -61,4 +63,8 @@ oci://registry-1.docker.io/octopusdeploy/kubernetes-agent :::div{.info} Support for running the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) with high availability Octopus clusters was added in v2025.4 -::: \ No newline at end of file +::: + +The Kubernetes monitor is able to avoid configuration for each individual Octopus server node. Instead, simply set up a single load balancer endpoint for gRPC and use that url. + +Refer to the [load balancer documentation](/docs/installation/load-balancers#grpc-services) for further information. 
diff --git a/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md b/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md index 97ec4ef0bc..12b787d05a 100644 --- a/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md +++ b/src/pages/docs/kubernetes/targets/kubernetes-agent/index.md @@ -1,7 +1,7 @@ --- layout: src/layouts/Default.astro pubDate: 2024-04-22 -modDate: 2025-03-28 +modDate: 2026-01-15 title: Kubernetes agent navTitle: Overview navSection: Kubernetes agent @@ -100,15 +100,17 @@ To simplify this, there is an installation wizard in Octopus to generate the req :::div{.warning} Helm will use your current kubectl config, so make sure your kubectl config is pointing to the correct cluster before executing the following helm commands. You can see the current kubectl config by executing: + ```bash kubectl config view ``` + ::: ### Configuration -1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**. -2. Select **KUBERNETES** and click **ADD** on the Kubernetes Agent card. +1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target** +2. Select **KUBERNETES** and click **ADD** on the Kubernetes Agent card 3. This launches the Add New Kubernetes Agent dialog :::figure @@ -118,8 +120,7 @@ kubectl config view 1. Enter a unique display name for the target. This name is used to generate the Kubernetes namespace, as well as the Helm release name. 2. Select at least one [environment](/docs/infrastructure/environments) for the target. 3. Select at least one [target tag](/docs/infrastructure/deployment-targets/target-tags) for the target. -4. Optionally, add the name of an existing [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the agent to use. The storage class must support the ReadWriteMany [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). -If no storage class name is added, the default Network File System (NFS) storage will be used. +4. Optionally, set the default namespace that resources are deployed to. This is only used if the step configuration or Kubernetes manifests don't specify a namespace. :::div{.warning} As the display name is used for the Helm release name, this name must be unique for a given cluster. This means that if you have a Kubernetes agent and Kubernetes worker with the same name (e.g. `production`), then they will clash during installation. @@ -127,13 +128,16 @@ As the display name is used for the Helm release name, this name must be unique If you do want a Kubernetes agent and Kubernetes worker to have the same name, then prepend the type to the name (e.g. `worker production` and `agent production`) during installation. This will install them with unique Helm release names, avoiding the clash. After installation, the worker & target names can then be changed in the Octopus Server UI to the desired name to remove the prefix. ::: -#### Advanced options +#### Advanced settings :::figure -![Kubernetes Agent default namespace](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png) +![Kubernetes Agent Advanced Settings Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-advanced-settings.png) ::: -You can choose a default Kubernetes namespace that resources are deployed to. This is only used if the step configuration or Kubernetes manifests don’t specify a namespace.
+Choose whether to install additional components, such as the [Kubernetes monitor](/docs/kubernetes/targets/kubernetes-agent/kubernetes-monitor) or the [Permissions controller](/docs/kubernetes/targets/kubernetes-agent/granular-permissions). + +Optionally, add the name of an existing [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) for the agent to use. The storage class must support the ReadWriteMany [access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). +If no storage class name is added, the default Network File System (NFS) storage will be used. ### NFS CSI driver @@ -147,9 +151,11 @@ A requirement of using the NFS pod is the installation of the [NFS CSI Driver](h :::div{.warning} If you receive an error with the text `failed to download` or `no cached repo found` when attempting to install the NFS CSI driver via helm, try executing the following command and then retrying the install command: + ```bash helm repo update ``` + ::: ### Installation helm command @@ -160,37 +166,36 @@ At the end of the wizard, Octopus generates a Helm command that you copy and pas ![Kubernetes Agent Wizard Helm command Page](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-helm-command.png) ::: -:::div{.hint} The helm command includes a 1-hour bearer token that is used when the agent first initializes, to register itself with Octopus Server. -::: -:::div{.hint} The terminal Kubernetes context must have enough permissions to create namespaces and install resources into that namespace. If you wish to install the agent into an existing namespace, remove the `--create-namespace` flag and change the value after `--namespace`. -::: If left open, the installation dialog waits for the agent to establish a connection and run a health check. Once successful, the Kubernetes agent target is ready for use! -:::figure -![Kubernetes Agent Wizard successful installation](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-wizard-success.png) -::: :::div{.hint} A successful health check indicates that deployments can successfully be executed. ::: +### Customizing the Helm command + +Look at the Helm chart [values.yaml](https://github.com/OctopusDeploy/helm-charts/blob/main/charts/kubernetes-agent/values.yaml) file for all the available options. + +The Kubernetes monitor is deployed as a sub-chart to the Kubernetes agent. [The available values for the monitor are listed here](https://github.com/OctopusDeploy/helm-charts/blob/main/charts/kubernetes-agent/kubernetes-monitor.md). All Kubernetes monitor values should be nested under a `kubernetesMonitor` key when deployed with the Kubernetes agent chart. + ## Configuring the agent with Tenants While the wizard doesn't support selecting Tenants or Tenant tags, the agent can be configured for tenanted deployments in two ways: -1. Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered. +- Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered.
:::figure ![Kubernetes Agent ](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-settings-page-tenants.png) ::: -2. Set additional variables in the helm command to allow the agent to register itself with associated Tenants or Tenant tags. You also need to provider a value for the `TenantedDeploymentParticipation` value. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`. +- Set additional variables in the helm command to allow the agent to register itself with associated Tenants or Tenant tags. You also need to provide a value for `TenantedDeploymentParticipation`. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`. For example, to add these values: + ```bash --set agent.tenants="{<tenant-1>,<tenant-2>}" \ --set agent.tenantTags="{<tenant-tag-1>,<tenant-tag-2>}" \ @@ -202,6 +207,7 @@ You don't need to provide both Tenants and Tenant Tags, but you do need to provi ::: In a full command: + ```bash helm upgrade --install --atomic \ --set agent.acceptEula="Y" \ @@ -229,7 +235,7 @@ Server certificate support was added in Kubernetes agent 1.7.0 It is common for organizations to have their Octopus Deploy server hosted in an environment where it has an SSL/TLS certificate that is not part of the global certificate trust chain. As a result, the Kubernetes agent will fail to register with the target server due to certificate errors. A typical error looks like this: -``` +```log 2024-06-21 04:12:01.4189 | ERROR | The following certificate errors were encountered when establishing the HTTPS connection to the server: RemoteCertificateNameMismatch, RemoteCertificateChainErrors Certificate subject name: CN=octopus.corp.domain Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F @@ -237,7 +243,7 @@ Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F To resolve this, you need to provide the Kubernetes agent with a base64-encoded string of the public key of either the self-signed certificate or root organization CA certificate in either `.pem` or `.crt` format. When viewed as text, this will look similar to this: -``` +```text -----BEGIN CERTIFICATE----- MII... -----END CERTIFICATE----- @@ -268,6 +274,24 @@ data: octopus-server-certificate.pem: "<base64-encoded-certificate>" +### gRPC certificates + +When installing the Kubernetes monitor, you may encounter the same certificate issues for the gRPC communications as you do for the Octopus Server certificate. + +Depending on your load balancer configuration, you have several options for how to handle this. + +When using TLS/SSL passthrough, no additional configuration is required. The Kubernetes monitor will automatically use the self-signed certificate generated by Octopus Server. + +When using TLS/SSL bridging, the self-signed certificate or root organization CA certificate will need to be provided to the Helm command. This can be the same certificate as your HTTPS certificate, but it does not need to be. It must match the certificate configured in your load balancer. + +To include this, add the following to the generated installation command: + +```bash +--set kubernetesMonitor.monitor.customCaCertificate="<base64-encoded-certificate>" +``` + +[See here](/docs/installation/load-balancers/use-nginx-as-reverse-proxy#grpc-communications) for sample load balancer configurations covering these scenarios.
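As a sketch of the bridging case, the base64-encoded string can be produced from the certificate file with the standard `base64` utility. The file name below is hypothetical; `kubernetesMonitor.monitor.customCaCertificate` is the chart value shown above.

```bash
# Sketch only: lb-ca.crt is a hypothetical PEM file holding the certificate
# that is configured on your load balancer.
CA_B64=$(base64 -w0 lb-ca.crt)    # on macOS, use: base64 -i lb-ca.crt

# Append the encoded value to the generated installation command:
#   --set kubernetesMonitor.monitor.customCaCertificate="$CA_B64"
```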
+ ## Agent tooling For all Kubernetes steps, except the `Run a kubectl script` step, the agent uses the `octopusdeploy/kubernetes-agent-tools-base` default container image to execute its workloads. It will correctly select and pull the version of the image that's specific to the cluster's version. @@ -277,7 +301,7 @@ For the `Run a kubectl script` step, if there is a [container image](/docs/proje To override these automatically resolved tooling images, you can set the helm chart values of `scriptPods.worker.image.repository` and `scriptPods.worker.image.tag` for the agent running as a worker, or `scriptPods.deploymentTarget.image` and `scriptPods.deploymentTarget.tag` when running the agent as a deployment target. :::div{.warning} -In Octopus Server versions prior to `2024.3.7669`, the Kubernetes agent erroneously used container images defined in _all_ Kubernetes steps, not just the `Run a kubectl script` step. +In Octopus Server versions prior to `2024.3.7669`, the Kubernetes agent erroneously used container images defined in *all* Kubernetes steps, not just the `Run a kubectl script` step. ::: This image contains the minimum required tooling to run Kubernetes workloads for Octopus Deploy, namely: @@ -311,15 +335,19 @@ To check if a Kubernetes agent can be manually upgraded, navigate to the **Infra ### Helm upgrade command To upgrade a Kubernetes agent via `helm`, note the following fields from the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page: -* Helm Release Name -* Namespace + +- Helm Release Name +- Namespace Then, from a terminal connected to the cluster containing the instance, execute the following command: ```bash helm upgrade --atomic --namespace NAMESPACE HELM_RELEASE_NAME oci://registry-1.docker.io/octopusdeploy/kubernetes-agent ``` -__Replace NAMESPACE and HELM_RELEASE_NAME with the values noted__ + +:::div{.hint} +Replace NAMESPACE and HELM_RELEASE_NAME with the values noted above. +::: If, after the upgrade command has executed, you find that there are issues with the agent, you can roll back to the previous helm release by executing: @@ -327,7 +355,6 @@ If after the upgrade command has executed, you find that there is issues with th helm rollback --namespace NAMESPACE HELM_RELEASE_NAME ``` - ## Uninstalling the Kubernetes agent To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy.
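A minimal sketch of the cluster-side cleanup, assuming the Helm release name and namespace noted on the target's Connectivity page; deleting the whole namespace follows the recommendation given earlier for removing an agent. The deployment target itself must still be deleted in the Octopus UI.

```bash
# Sketch only: substitute the release name and namespace from the
# Connectivity page of the deployment target.
helm uninstall --namespace NAMESPACE HELM_RELEASE_NAME

# Deleting the namespace removes any remaining agent resources.
kubectl delete namespace NAMESPACE
```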