33 changes: 21 additions & 12 deletions src/pages/docs/kubernetes/targets/kubernetes-agent/index.md
@@ -98,17 +98,20 @@ The Kubernetes agent is installed using [Helm](https://helm.sh) via the [octopus
To simplify this, there is an installation wizard in Octopus to generate the required values.

:::div{.warning}

Helm will use your current kubectl config, so make sure it is pointing to the correct cluster before executing the following helm commands.
You can see the current kubectl config by executing:

```bash
kubectl config view
```
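
If the current context does not point at the cluster you intend to install the agent into, switch contexts first. A minimal sketch (`my-cluster` is a placeholder for a context name from your own kubeconfig):

```bash
# List the available contexts, then switch to the one for the target cluster
kubectl config get-contexts
kubectl config use-context my-cluster
```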

:::

### Configuration

1. Navigate to **Infrastructure ➜ Deployment Targets**, and click **Add Deployment Target**.
2. Select **KUBERNETES** and click **ADD** on the Kubernetes Agent card.
3. This launches the **Add New Kubernetes Agent** dialog.

:::figure
@@ -133,7 +136,7 @@ If you do want a Kubernetes agent and Kubernetes worker to have the same name, T
![Kubernetes Agent default namespace](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-default-namespace.png)
:::

You can choose a default Kubernetes namespace that resources are deployed to. This is only used if the step configuration or Kubernetes manifests don't specify a namespace.

### NFS CSI driver

@@ -146,10 +149,13 @@ A requirement of using the NFS pod is the installation of the [NFS CSI Driver](h
:::

:::div{.warning}

If you receive an error with the text `failed to download` or `no cached repo found` when attempting to install the NFS CSI driver via helm, try executing the following command and then retrying the install command:

```bash
helm repo update
```

:::
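
For reference, the driver itself is typically installed with Helm. A minimal sketch, assuming the chart repository published by the kubernetes-csi project (check the NFS CSI Driver documentation linked above for the current chart version and recommended values before running this):

```bash
# Add the NFS CSI driver chart repository and install the driver into kube-system
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system
```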

### Installation helm command
@@ -184,13 +190,14 @@ While the wizard doesn't support selecting Tenants or Tenant tags, the agent can

1. Use the Deployment Target settings UI at **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Settings** to add a Tenant and set the Tenanted Deployment Participation as required. This is done after the agent has successfully installed and registered.

:::figure
![Kubernetes Agent ](/docs/img/infrastructure/deployment-targets/kubernetes/kubernetes-agent/kubernetes-agent-settings-page-tenants.png)
:::

2. Set additional variables in the helm command to allow the agent to register itself with associated Tenants or Tenant tags. You also need to provide a value for `TenantedDeploymentParticipation`. Possible values are `Untenanted` (default), `Tenanted`, and `TenantedOrUntenanted`.

For example, to add these values:

```bash
--set agent.tenants="{<tenant1>,<tenant2>}" \
--set agent.tenantTags="{<tenantTag1>,<tenantTag2>}" \
@@ -202,6 +209,7 @@ You don't need to provide both Tenants and Tenant Tags, but you do need to provi
:::

In a full command:

```bash
helm upgrade --install --atomic \
--set agent.acceptEula="Y" \
@@ -229,15 +237,15 @@ Server certificate support was added in Kubernetes agent 1.7.0

It is common for organizations to have their Octopus Deploy server hosted in an environment where it has an SSL/TLS certificate that is not part of the global certificate trust chain. As a result, the Kubernetes agent will fail to register with the target server due to certificate errors. A typical error looks like this:

```text
2024-06-21 04:12:01.4189 | ERROR | The following certificate errors were encountered when establishing the HTTPS connection to the server: RemoteCertificateNameMismatch, RemoteCertificateChainErrors
Certificate subject name: CN=octopus.corp.domain
Certificate thumbprint: 42983C1D517D597B74CDF23F054BBC106F4BB32F
```

To resolve this, you need to provide the Kubernetes agent with a base64-encoded string of the public key of either the self-signed certificate or root organization CA certificate in either `.pem` or `.crt` format. When viewed as text, this will look similar to this:

```text
-----BEGIN CERTIFICATE-----
MII...
-----END CERTIFICATE-----
@@ -277,7 +285,7 @@ For the `Run a kubectl script` step, if there is a [container image](/docs/proje
To override these automatically resolved tooling images, you can set the helm chart values of `scriptPods.worker.image.repository` and `scriptPods.worker.image.tag` for the agent running as a worker, or `scriptPods.deploymentTarget.image` and `scriptPods.deploymentTarget.tag` when running the agent as a deployment target.
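
For example, when the agent is registered as a worker, the overrides might be appended to the wizard-generated helm command along these lines (a sketch only; the repository and tag shown are placeholders, not Octopus defaults):

```bash
--set scriptPods.worker.image.repository="my-registry.example.com/custom-worker-tools" \
--set scriptPods.worker.image.tag="1.2.3" \
```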

:::div{.warning}
In Octopus Server versions prior to `2024.3.7669`, the Kubernetes agent erroneously used container images defined in *all* Kubernetes steps, not just the `Run a kubectl script` step.
:::

This image contains the minimum required tooling to run Kubernetes workloads for Octopus Deploy, namely:
@@ -311,23 +319,24 @@ To check if a Kubernetes agent can be manually upgraded, navigate to the **Infra
### Helm upgrade command

To upgrade a Kubernetes agent via `helm`, note the following fields from the **Infrastructure ➜ Deployment Targets ➜ [DEPLOYMENT TARGET] ➜ Connectivity** page:

- Helm Release Name
- Namespace

Then, from a terminal connected to the cluster containing the instance, execute the following command:

```bash
helm upgrade --atomic --namespace NAMESPACE HELM_RELEASE_NAME oci://registry-1.docker.io/octopusdeploy/kubernetes-agent
```

Note: Replace `NAMESPACE` and `HELM_RELEASE_NAME` with your own values.

If, after the upgrade command has executed, you find that there are issues with the agent, you can roll back to the previous helm release by executing:

```bash
helm rollback --namespace NAMESPACE HELM_RELEASE_NAME
```


## Uninstalling the Kubernetes agent

To fully remove the Kubernetes agent, you need to delete the agent from the Kubernetes cluster as well as delete the deployment target from Octopus Deploy.
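
The cluster-side removal is a standard Helm uninstall. A sketch, reusing the Helm Release Name and Namespace noted on the Connectivity page (deleting the deployment target itself is done separately in the Octopus UI):

```bash
helm uninstall --namespace NAMESPACE HELM_RELEASE_NAME
```
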
54 changes: 31 additions & 23 deletions src/pages/docs/kubernetes/targets/kubernetes-api/index.md
@@ -13,15 +13,14 @@ Kubernetes API targets are used by the [Kubernetes steps](/docs/deployments/kube
Conceptually, a Kubernetes API target represents a permission boundary and an endpoint. Kubernetes [permissions](https://oc.to/KubernetesRBAC) and [quotas](https://oc.to/KubernetesQuotas) are defined against a namespace, and both the account and namespace are captured as a Kubernetes API target, along with the cluster endpoint URL. A namespace is required when registering the Kubernetes API target with Octopus Deploy. By default, the namespace used in the registration is used in health checks and deployments. The namespace can be overwritten in the deployment process.

:::div{.hint}
From **Octopus 2022.2**, AKS target discovery has been added to the Kubernetes Target Discovery Early Access Preview and is enabled via **Configuration ➜ Features**.

**Octopus 2022.3** will also include EKS cluster support.
:::

## Discovering Kubernetes targets

Octopus can discover Kubernetes API targets in *Azure Kubernetes Service* (AKS) or *Amazon Elastic Container Service for Kubernetes* (EKS) as part of your deployment using tags on your AKS or EKS resource.

:::div{.hint}
From **Octopus 2022.3**, you can configure the well-known variables used to discover Kubernetes targets when editing your deployment process in the Web Portal. See [cloud target discovery](/docs/infrastructure/deployment-targets/cloud-target-discovery) for more information.
@@ -87,15 +86,15 @@ users:

The Azure Service Principal is only used with AKS clusters. To log into ACS or ACS-Engine clusters, standard Kubernetes credentials like certificates or service account tokens must be used.

:::div{.hint}
From Kubernetes 1.26, [the default azure auth plugin has been removed from kubectl](https://github.com/kubernetes/kubernetes/blob/ad18954259eae3db51bac2274ed4ca7304b923c4/CHANGELOG/CHANGELOG-1.26.md#deprecation), so clusters targeting Kubernetes 1.26+ that have [Local Account Access disabled](https://oc.to/AKSDisableLocalAccount) in Azure will require the worker or execution container to have access to the [kubelogin](https://oc.to/Kubelogin) CLI tool, as well as the Octopus Deployment Target setting **Login with administrator credentials** disabled. This requires **Octopus 2023.3**.

If Local Account access is enabled on the AKS cluster, the Octopus Deployment Target setting **Login with administrator credentials** will also need to be enabled so that the Local Accounts are used instead of the default auth plugin.
:::

- **AWS Account**: When using an EKS cluster, [AWS accounts](/docs/infrastructure/accounts/aws) allow IAM accounts and roles to be used.

The interaction between AWS IAM and Kubernetes Role Based Access Control (RBAC) can be tricky. We highly recommend reading the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html).

:::div{.hint}
**Common issues:**
@@ -135,6 +134,7 @@ users:
-in certificate.crt `
-inkey private.key
```

```bash
#!/bin/bash
echo $1 | base64 --decode > certificate.crt
@@ -154,29 +154,31 @@ users:
7. Enter the Kubernetes cluster URL. Each Kubernetes target requires the cluster URL, which is defined in the `Kubernetes cluster URL` field. In the example YAML above, this is defined in the `server` field.
8. Optionally, select the certificate authority if you've added one. Kubernetes clusters are often protected with self-signed certificates. In the YAML example above, the certificate is saved as a base64-encoded string in the `certificate-authority-data` field.

To communicate with a Kubernetes cluster with a self-signed certificate over HTTPS, you can either select the **Skip TLS verification** option, or supply the certificate in `The optional cluster certificate authority` field.

Decoding the `certificate-authority-data` field results in a string that looks something like this (the example has been truncated for readability):

```text
-----BEGIN CERTIFICATE-----
XXXXXXXXXXXXXXXX...
-----END CERTIFICATE-----
```

Save this text to a file called `ca.pem`, and upload it to the [Octopus certificate management area](https://oc.to/CertificatesDocumentation). The certificate can then be selected in the `cluster certificate authority` field.

9. Enter the Kubernetes Namespace. (A sketch of the recommended namespace and service account setup is shown after this list.)
When a single Kubernetes cluster is shared across environments, resources deployed to the cluster will often be separated by environment and by application, team, or service. In this situation, the recommended approach is to create a namespace for each application and environment (e.g., `my-application-development` and `my-application-production`), and create a Kubernetes service account that has permissions to just that namespace.

Where each environment has its own Kubernetes cluster, namespaces can be assigned to each application, team or service (e.g. `my-application`).

In both scenarios, a target is then created for each Kubernetes cluster and namespace. The `Target Role` tag is set to the application name (e.g. `my-application`), and the `Environments` are set to the matching environment.

When a Kubernetes target is used, the namespace it references is created automatically if it does not already exist.

10. Select a worker pool for the target.

To make use of the Kubernetes steps, the Octopus Server or workers that will run the steps need to have the `kubectl` executable installed. Linux workers also need to have the `jq`, `xargs` and `base64` applications installed.

11. Click **SAVE**.
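
As a sketch of the namespace-per-application-and-environment approach recommended in step 9 (all names below are placeholders, and the built-in `edit` cluster role is just one possible permission set):

```bash
# Create a namespace for one application/environment combination
kubectl create namespace my-application-development

# Create a service account for Octopus to use, scoped to that namespace
kubectl create serviceaccount octopus-deployer --namespace my-application-development

# Grant the service account permissions within that namespace only
kubectl create rolebinding octopus-deployer-edit \
  --clusterrole=edit \
  --serviceaccount=my-application-development:octopus-deployer \
  --namespace my-application-development
```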

:::div{.warning}
@@ -273,27 +275,31 @@ kubectl get secret $(kubectl get serviceaccount jenkins-deployer -o jsonpath="{.
The token can then be saved as an Octopus Token account and assigned to the Kubernetes target.

:::div{.warning}

Kubernetes versions 1.24+ no longer automatically create tokens for service accounts; tokens need to be created manually using the **create token** command:

```bash
kubectl create token jenkins-deployer
```

From Kubernetes version 1.29, a warning will be displayed when using automatically created Tokens. Make sure to rotate any Octopus Token Accounts to use manually created tokens via **create token** instead.

:::

## Kubectl

Kubernetes targets use the `kubectl` executable to communicate with the Kubernetes cluster. This executable must be available on the path on the target where the step is run. When using workers, this means the `kubectl` executable must be in the path on the worker that is executing the step. Otherwise, the `kubectl` executable must be in the path on the Octopus Server itself.
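
A quick way to confirm the executable is available where the step will run is to check it from that context; a minimal sketch:

```bash
# Run on the worker (or the Octopus Server) that will execute the Kubernetes steps
kubectl version --client
```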

## Vendor Authentication Plugins {#vendor-authentication-plugins}

Prior to `kubectl` version 1.26, the logic for authenticating against various cloud providers (e.g. Azure Kubernetes Services, Google Kubernetes Engine) was included "in-tree" in `kubectl`. From version 1.26 onward, the cloud-vendor-specific authentication code has been removed from `kubectl`, in favor of a plugin approach.

What this means for your deployments:

- Amazon Elastic Container Services (ECS): No change required. Octopus already supports using either the AWS CLI or the `aws-iam-authenticator` plugin.
- Azure Kubernetes Services (AKS): No change required. The way Octopus authenticates against AKS clusters never used the in-tree Azure authentication code, and will continue to function as normal.
- From **Octopus 2023.3**, you will need to ensure that the [kubelogin](https://oc.to/Kubelogin) CLI tool is also available if you have disabled local Kubernetes accounts.
- Google Kubernetes Engine (GKE): If you upgrade to `kubectl` 1.26 or higher, you will need to ensure that the `gke-gcloud-auth-plugin` tool is also available (a typical install is sketched after this list). More information can be found on [Google's announcement about this change](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke).
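
For GKE, the plugin is distributed with the Google Cloud SDK. A minimal sketch, assuming `gcloud` is already installed on the worker or in the execution container:

```bash
# Install the GKE auth plugin and confirm it is on the path
gcloud components install gke-gcloud-auth-plugin
gke-gcloud-auth-plugin --version
```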

## Helm

Expand All @@ -313,6 +319,8 @@ If you're running into issues with your Kubernetes targets, it's possible you'll

Setting the Octopus variable `Octopus.Action.Kubernetes.OutputKubeConfig` to `True` for any deployment or runbook using a Kubernetes target will cause the generated kube config file to be printed into the logs (with passwords masked). This can be used to verify the configuration file used to connect to the Kubernetes cluster.

Setting the Octopus variable `Octopus.Action.Kubernetes.VerboseOutput` to `True` will cause successful output from Kubernetes CLI tools (`kubectl`, `helm`, `aws`, `az`, `gcloud`, etc.) to be logged at the Info level instead of Verbose. This is useful when debugging deployments to see the full output of these tools without needing to enable verbose logging for the entire deployment.
> **Reviewer comment:** Perhaps we could mention the default logging level if this flag is not set or false? For example, do we only log errors if `VerboseOutput` is `False`?


If Kubernetes targets fail their health checks, the best way to diagnose the issue is to run a `Run a kubectl CLI Script` step with a script that can inspect the various settings that must be in place for a Kubernetes target to function correctly. Octopus deployments will run against unhealthy targets by default, so the fact that the target failed its health check does not prevent these kinds of debugging steps from running.

An example script for debugging a Kubernetes target is shown below: