Merged

29 commits
a273324
moving ec to top level
paigecalvert Nov 20, 2024
c853bcd
edits
paigecalvert Nov 20, 2024
738262b
Move EC to top level
paigecalvert Nov 21, 2024
fd3ddb9
divide automation topic
paigecalvert Nov 21, 2024
11673ad
update kots install overview
paigecalvert Nov 21, 2024
c1d6735
split up configvalues
paigecalvert Nov 21, 2024
d679872
split info about deleting kots
paigecalvert Nov 21, 2024
8a1309c
fix partials
paigecalvert Nov 21, 2024
d0975e2
sidebar edits
paigecalvert Nov 21, 2024
8d6ea33
edits
paigecalvert Nov 21, 2024
dc1563d
remove configvalues topic
paigecalvert Nov 21, 2024
49d22e5
missing partial
paigecalvert Nov 21, 2024
325a101
Merge branch 'main' into move-ec
paigecalvert Nov 21, 2024
b4b770d
fixing xrefs
paigecalvert Nov 21, 2024
96af7c7
fixing anchor links
paigecalvert Nov 21, 2024
f898277
fixing xrefs
paigecalvert Nov 21, 2024
a930a70
update topic title
paigecalvert Nov 21, 2024
2faac97
edit xrefs
paigecalvert Nov 22, 2024
b49e2f0
edit
paigecalvert Nov 22, 2024
007e5aa
reorder
paigecalvert Dec 2, 2024
8d2338b
remove embedded k8s overview
paigecalvert Dec 2, 2024
b62ad72
updating xrefs
paigecalvert Dec 5, 2024
053e218
updating xrefs
paigecalvert Dec 5, 2024
c10e083
editing xrefs to kots install overview page
paigecalvert Dec 5, 2024
cfba2fe
remove distributing overview topic and redirect
paigecalvert Dec 5, 2024
9c29375
removing embedded kurl language
paigecalvert Dec 5, 2024
2014300
update xrefs
paigecalvert Dec 5, 2024
89b2b18
move kustomize topic
paigecalvert Dec 5, 2024
6e58bf0
sidebar edit
paigecalvert Dec 5, 2024
4 changes: 2 additions & 2 deletions docs/enterprise/cluster-management-add-nodes.md
@@ -1,10 +1,10 @@
# Adding Nodes to kURL Clusters

This topic describes how to add primary and secondary nodes to an embedded cluster provisioned with Replicated kURL.
This topic describes how to add primary and secondary nodes to a Replicated kURL cluster.

## Overview

You can generate commands in the Replicated KOTS Admin Console to join additional primary and secondary nodes to embedded kURL clusters. Primary nodes run services that control the cluster. Secondary nodes run services that control the pods that host the application containers. Adding nodes can help manage resources to ensure that the application runs smoothly.
You can generate commands in the Replicated KOTS Admin Console to join additional primary and secondary nodes to kURL clusters. Primary nodes run services that control the cluster. Secondary nodes run services that control the pods that host the application containers. Adding nodes can help manage resources to ensure that the application runs smoothly.

For high availability clusters, Kubernetes recommends using at least three primary nodes, and that you use an odd number of nodes to help with leader selection if machine or zone failure occurs. For more information, see [Creating Highly Available Clusters with kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) in the Kubernetes documentation.

70 changes: 23 additions & 47 deletions docs/enterprise/delete-admin-console.md
@@ -1,16 +1,20 @@
# Deleting the Admin Console and Removing Applications

This topic describes how to remove installed applications and delete the Replicated Admin Console from a cluster. See the following sections:
* [Remove an Application](#remove-an-application)
* [Delete the Admin Console](#delete-the-admin-console)
This topic describes how to remove installed applications and delete the Replicated Admin Console from a cluster.

## Remove an Application

This section describes how to remove an application instance that was installed with KOTS in an existing cluster.

### About Removing an Installed Application Instance

The Replicated KOTS CLI `kots remove` command removes the reference to an installed application from the Admin Console. When you use `kots remove`, the Admin Console no longer manages the application because the record of that application’s installation is removed. This means that you can no longer manage the application through the Admin Console or through the KOTS CLI.

By default, `kots remove` does not delete any of the installed Kubernetes resources for the application from the cluster. To remove both the reference to an application from the Admin Console and remove any resources for the application from the cluster, you can run `kots remove` with the `--undeploy` flag.

It can be useful to remove only the reference to an application from the Admin Console if you want to reinstall the application without recreating the namespace or other Kubernetes resources. For example, you might have installed an application with an incorrect license file and need to reinstall it with the correct license.
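
As a minimal sketch of both variants (APP_SLug and NAMESPACE below are placeholders for your application slug and the namespace where KOTS is installed):

```bash
# Remove only the Admin Console's reference to the application.
# The application's deployed resources remain in the cluster.
kubectl kots remove APP_SLUG -n NAMESPACE

# Remove the reference and also delete the application's resources from the cluster.
kubectl kots remove APP_SLUG -n NAMESPACE --undeploy
```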

### Procedure

To remove an application:

@@ -46,43 +50,39 @@ To remove an application:

## Delete the Admin Console

When you install an application with the Admin Console, Replicated KOTS also creates the Kubernetes resources for the Admin Console itself on the cluster. The Admin Console includes Deployments and Services, Secrets, and other resources such as StatefulSets and PersistentVolumeClaims.

By default, KOTS also creates Kubernetes ClusterRole and ClusterRoleBinding resources that grant permissions to the Admin Console on the cluster level. These `kotsadm-role` and `kotsadm-rolebinding` resources are managed outside of the namespace where the Admin Console is installed. Alternatively, when the Admin Console is installed with namespace-scoped access, KOTS creates Role and RoleBinding resources inside the namespace where the Admin Console is installed.

If you need to completely delete the Admin Console and an application installation, such as during testing, follow one of these procedures depending on the type of cluster where you installed the Admin Console:

* **Existing cluster**: Manually delete the Admin Console Kubernetes objects and resources from the cluster. See [Delete from an Existing Cluster](#existing) below.
* **Embedded cluster**: Remove Kubernetes from the VM where the cluster is installed. See [Delete from an Embedded Cluster](#embedded) below.
This section describes how to remove the KOTS Admin Console from an existing cluster.

:::note
These procedures do not uninstall the KOTS CLI. To uninstall the KOTS CLI, see [Uninstall](https://docs.replicated.com/reference/kots-cli-getting-started#uninstall) in _Installing the KOTS CLI_.
:::
### About Deleting the Admin Console from an Existing Cluster

### Delete from an Existing Cluster {#existing}
When you install an application, KOTS creates the Kubernetes resources for the Admin Console itself on the cluster. The Admin Console includes Deployments and Services, Secrets, and other resources such as StatefulSets and PersistentVolumeClaims.

In existing cluster installations, if the Admin Console is not installed in the `default` namespace, then you delete the Admin Console by deleting the namespace where it is installed.
By default, KOTS also creates Kubernetes ClusterRole and ClusterRoleBinding resources that grant permissions to the Admin Console on the cluster level. These `kotsadm-role` and `kotsadm-rolebinding` resources are managed outside of the namespace where the Admin Console is installed. Alternatively, when the Admin Console is installed with namespace-scoped access, KOTS creates Role and RoleBinding resources inside the namespace where the Admin Console is installed.

If you installed the Admin Console with namespace-scoped access, then the Admin Console Role and RoleBinding RBAC resources are also deleted when you delete the namespace. Alternatively, if you installed with the default cluster-scoped access, then you manually delete the Admin Console ClusterRole and ClusterRoleBindings resources from the cluster.
In existing cluster installations, if the Admin Console is not installed in the `default` namespace, then you delete the Admin Console by deleting the namespace where it is installed.

The application vendor can require, support, or not support namespace-scoped installations. For more information, see [supportMinimalRBACPrivileges](/reference/custom-resource-application#supportminimalrbacprivileges) and [requireMinimalRBACPrivileges](/reference/custom-resource-application#requireminimalrbacprivileges) in _Application_.
If you installed the Admin Console with namespace-scoped access, then the Admin Console Role and RoleBinding RBAC resources are also deleted when you delete the namespace. Alternatively, if you installed with the default cluster-scoped access, then you manually delete the Admin Console ClusterRole and ClusterRoleBindings resources from the cluster. For more information, see [supportMinimalRBACPrivileges](/reference/custom-resource-application#supportminimalrbacprivileges) and [requireMinimalRBACPrivileges](/reference/custom-resource-application#requireminimalrbacprivileges) in _Application_.

For more information about installing with cluster- or namespace-scoped access, see [RBAC Requirements](/enterprise/installing-general-requirements#rbac-requirements) in _Installation Requirements_.

To delete the Admin Console from an existing cluster:
### Procedure

To completely delete the Admin Console from an existing cluster:

1. Run the following command to delete the namespace where the Admin Console is installed:

:::note
* You cannot delete the `default` namespace.
* This command deletes everything inside the specified namespace, including the Admin Console Role and RoleBinding resources if you installed with namespace-scoped access.
:::important
This command deletes everything inside the specified namespace, including the Admin Console Role and RoleBinding resources if you installed with namespace-scoped access.
:::

```
kubectl delete ns NAMESPACE
```
Replace `NAMESPACE` with the name of the namespace where the Admin Console is installed.

:::note
You cannot delete the `default` namespace.
:::

1. (Cluster-scoped Access Only) If you installed the Admin Console with the default cluster-scoped access, run the following commands to delete the Admin Console ClusterRole and ClusterRoleBinding from the cluster:

```
@@ -93,28 +93,4 @@ To delete the Admin Console from an existing cluster:
kubectl delete clusterrolebinding kotsadm-rolebinding
```

### Delete from an Embedded Cluster {#embedded}

If you installed on a cluster created by Replicated kURL, KOTS installs the Admin Console in the `default` namespace. Kubernetes does not allow the `default` namespace to be deleted.

To delete the Admin Console from an embedded cluster, use the kURL `tasks.sh` `reset` command to remove Kubernetes from the system.

:::important
The `reset` command is intended to be used only on development servers. It has the potential to leave your machine in an unrecoverable state. It is not recommended unless you are able to discard this server and provision a new one.
:::

Instead of using the `reset` command, you can also discard your current VM (if you are using one) and recreate the VM with a new OS to reinstall the Admin Console and an application.

For more information about the `reset` command, see [Resetting a Node](https://kurl.sh/docs/install-with-kurl/managing-nodes#reset-a-node) in the kURL documentation.

To delete the Admin Console from an embedded cluster:

1. Run the following command to remove Kubernetes from the system:

```
curl -sSL https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s reset
```

1. Follow the instructions in the output of the command to manually remove any files that the `reset` command does not remove.

If the `reset` command is unsuccessful, discard your current VM, and recreate the VM with a new OS to reinstall the Admin Console and an application.
1. (Optional) To uninstall the KOTS CLI, see [Uninstall](https://docs.replicated.com/reference/kots-cli-getting-started#uninstall) in _Installing the KOTS CLI_.
8 changes: 4 additions & 4 deletions docs/enterprise/image-registry-kurl.md
@@ -1,4 +1,4 @@
# Image Registry for kURL Clusters
# Working with the kURL Image Registry

This topic describes the Replicated kURL registry for kURL clusters.

@@ -26,7 +26,7 @@ For more information, see [admin-console garbage-collect-images](/reference/kots

## Disable Image Garbage Collection

Image garbage collection is enabled by default for embedded kURL clusters that use the kURL registry.
Image garbage collection is enabled by default for kURL clusters that use the kURL registry.

To disable image garbage collection:

@@ -56,8 +56,8 @@ The kURL registry image garbage collection feature has the following limitations:

To prevent this from happening, include the optional images in the `additionalImages` list of the Application custom resource. For more information, see [`additionalImages`](/reference/custom-resource-application#additionalimages) in _Application_.

* **Shared Image Registries**: The image garbage collection process assumes that the registry is not shared with any other instances of Replicated KOTS, nor shared with any external applications. If the embedded kURL registry is used by another external application, disable garbage collection to prevent image loss.
* **Shared Image Registries**: The image garbage collection process assumes that the registry is not shared with any other instances of Replicated KOTS, nor shared with any external applications. If the built-in kURL registry is used by another external application, disable garbage collection to prevent image loss.

* **Customer Supplied Registries**: Image garbage collection is supported only when used with the embedded kURL registry. If the KOTS instance is configured to use a different registry, disable garbage collection to prevent image loss.
* **Customer Supplied Registries**: Image garbage collection is supported only when used with the built-in kURL registry. If the KOTS instance is configured to use a different registry, disable garbage collection to prevent image loss.

* **Application Rollbacks**: Image garbage collection has no effect when the `allowRollback` field in the KOTS Application custom resource is set to `true`. For more information, see [Application](/reference/custom-resource-application) in _KOTS Custom Resources_.
4 changes: 2 additions & 2 deletions docs/enterprise/image-registry-settings.mdx
@@ -8,11 +8,11 @@ This topic describes how to configure private registry settings in the Replicate

Using a private registry lets you create a custom image pipeline. Any proprietary configurations that you make to the application are shared only with the groups that you allow access, such as your team or organization. You also have control over the storage location, logging messages, load balancing requests, and other configuration options.

Private registries can be used with online or air gap clusters. For embedded kURL clusters, if the Replicated kURL installer spec includes the kURL Registry add-on, then the embedded registry is used to host the application images. For more information about the kURL Registry add-on, see [Image Registry for kURL Clusters](image-registry-kurl).
Private registries can be used with online or air gap clusters. For kURL clusters, if the Replicated kURL installer spec includes the kURL Registry add-on, then the built-in kURL registry is used to host the application images. For more information, see [Working with the kURL Image Registry](image-registry-kurl).

## Prerequisites

Your domain must support a Docker V2 protocol. For more information, see [Private Registry Requirements](installing-general-requirements#private-registry-requirements) in _Installation Requirements_.
Your domain must support a Docker V2 protocol. For more information, see [Compatible Image Registries](installing-general-requirements#registries) in _KOTS Installation Requirements_.
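
As a quick, hedged sanity check (registry.example.com is a placeholder), a registry that implements the Docker V2 protocol responds on the `/v2/` endpoint with `200`, or `401` if it requires authentication:

```bash
# Expect HTTP 200 (or 401 when authentication is required) from a V2-compatible registry.
curl -s -o /dev/null -w "%{http_code}\n" https://registry.example.com/v2/
```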

## Configure Private Registries in Online Clusters

64 changes: 64 additions & 0 deletions docs/enterprise/installing-embedded-automation.mdx
@@ -0,0 +1,64 @@
import ConfigValuesExample from "../partials/configValues/_configValuesExample.mdx"
import ConfigValuesProcedure from "../partials/configValues/_config-values-procedure.mdx"

# Installing with Embedded Cluster from the Command Line
@paigecalvert (Contributor, Author) commented on Nov 21, 2024:

EC gets its own install automation topic in its own section

In a follow-up PR, we could also consider making sure that each of the command line install topics is discoverable from the CI/CD section of the docs where people might need the info.


This topic describes how to install an application with Replicated Embedded Cluster from the command line.

## Overview

You can use the command line to install an application with Replicated Embedded Cluster. A common use case for installing from the command line is to automate installation, such as performing headless installations as part of CI/CD pipelines.

To install from the command line, you provide all the necessary installation assets, such as the license file and the application config values, with the installation command rather than through the Admin Console UI. Any preflight checks defined for the application run automatically during headless installations from the command line rather than being displayed in the Admin Console.

## Prerequisite

Create a ConfigValues YAML file to define the configuration values for the application release. The ConfigValues file allows you to pass the configuration values for an application from the command line with the install command, rather than through the Admin Console UI. For air-gapped environments, ensure that the ConfigValues file can be accessed from the installation environment.

The KOTS ConfigValues file includes the fields that are defined in the KOTS Config custom resource for an application release, along with the user-supplied and default values for each field, as shown in the example below:

<ConfigValuesExample/>
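
For reference, a minimal hypothetical ConfigValues file might be written like this; the `hostname` and `use_tls` items are invented for illustration, and the real item names come from the application's Config custom resource:

```bash
# Write a hypothetical ConfigValues file to pass to the installer with --config-values.
cat > configvalues.yaml <<'EOF'
apiVersion: kots.io/v1beta1
kind: ConfigValues
metadata:
  name: example-app
spec:
  values:
    hostname:
      value: app.example.com
    use_tls:
      default: "1"
      value: "1"
EOF
```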

<ConfigValuesProcedure/>
@paigecalvert (Contributor, Author):

^ info about generating the configvalues file is added to a partial rather than being in its own topic. This solved the problem of trying to find a home for a topic that cut across three different installers


## Online (Internet-Connected) Installation

To install with Embedded Cluster in an online environment:

1. Follow the steps provided in the Vendor Portal to download and untar the Embedded Cluster installation assets. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).

1. Run the following command to install:

```bash
sudo ./APP_SLUG install --license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--admin-console-password ADMIN_CONSOLE_PASSWORD
```

Replace:
* `APP_SLUG` with the unique slug for the application.
* `PATH_TO_LICENSE` with the path to the customer license file.
* `ADMIN_CONSOLE_PASSWORD` with a password for accessing the Admin Console.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.

## Air Gap Installation

To install with Embedded Cluster in an air-gapped environment:

1. Follow the steps provided in the Vendor Portal to download and untar the Embedded Cluster air gap installation assets. For more information, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap).

1. Ensure that the Embedded Cluster installation assets are available on the air-gapped machine, then run the following command to install:

```bash
sudo ./APP_SLUG install --license-file PATH_TO_LICENSE \
--config-values PATH_TO_CONFIGVALUES \
--admin-console-password ADMIN_CONSOLE_PASSWORD \
--airgap-bundle PATH_TO_AIRGAP_BUNDLE
```

Replace:
* `APP_SLUG` with the unique slug for the application.
* `PATH_TO_LICENSE` with the path to the customer license file.
* `PATH_TO_CONFIGVALUES` with the path to the ConfigValues file.
* `ADMIN_CONSOLE_PASSWORD` with a password for accessing the Admin Console.
* `PATH_TO_AIRGAP_BUNDLE` with the path to the Embedded Cluster `.airgap` bundle for the release.
19 changes: 19 additions & 0 deletions docs/enterprise/installing-embedded-requirements.mdx
@@ -0,0 +1,19 @@
import EmbeddedClusterRequirements from "../partials/embedded-cluster/_requirements.mdx"
import EmbeddedClusterPortRequirements from "../partials/embedded-cluster/_port-reqs.mdx"
import FirewallOpenings from "../partials/install/_firewall-openings.mdx"

# Embedded Cluster Installation Requirements
@paigecalvert (Contributor, Author):

New embedded cluster-specific installation requirements topic


This topic lists the installation requirements for Replicated Embedded Cluster. Ensure that the installation environment meets these requirements before attempting to install.

## System Requirements

<EmbeddedClusterRequirements/>

## Port Requirements

<EmbeddedClusterPortRequirements/>

## Firewall Openings for Online Installations

<FirewallOpenings/>
@paigecalvert (Contributor, Author):

^ note that I decided to just put the firewall openings for online installs in a partial for now. It appears in each of the installer requirements topics

2 changes: 1 addition & 1 deletion docs/enterprise/installing-embedded.mdx
@@ -10,7 +10,7 @@ Before you install, complete the following prerequisites:

<Prerequisites/>

* Ensure that the required domains are accessible from servers performing the installation. See [Firewall Openings for Online Installations](/enterprise/installing-general-requirements#firewall-openings-for-online-installations).
* Ensure that the required domains are accessible from servers performing the installation. See [Firewall Openings for Online Installations](/enterprise/installing-embedded-requirements#firewall-openings-for-online-installations).

## Install

4 changes: 2 additions & 2 deletions docs/enterprise/installing-existing-cluster-airgapped.mdx
@@ -14,7 +14,7 @@ import PushKotsImages from "../partials/install/_push-kotsadm-images.mdx"
import PlaceholderRoCreds from "../partials/install/_placeholder-ro-creds.mdx"
import KotsVersionMatch from "../partials/install/_kots-airgap-version-match.mdx"

# Air Gap Installation in Existing Clusters
# Air Gap Installation in Existing Clusters with KOTS

<IntroExisting/>

@@ -26,7 +26,7 @@ Complete the following prerequisites:

<PrereqsExistingCluster/>

* Ensure that there is a compatible Docker image registry available inside the network. For more information about Docker registry compatibility, see [Private Registry Requirements](/enterprise/installing-general-requirements#private-registry-requirements).
* Ensure that there is a compatible Docker image registry available inside the network. For more information about Docker registry compatibility, see [Compatible Image Registries](/enterprise/installing-general-requirements#registries).

KOTS rewrites the application image names in all application manifests to read from the on-premises registry, and it re-tags and pushes the images to the on-premises registry. When authenticating to the registry, credentials with `push` permissions are required.
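
As a hedged sketch of how these push credentials are typically supplied during a headless air gap install (every value below is a placeholder, and the flag names should be verified against the KOTS CLI reference for your KOTS version):

```bash
# Headless air gap install into an existing cluster, re-tagging and pushing the
# application images to an internal registry. All values are placeholders; verify
# the flags against the KOTS CLI reference before use.
kubectl kots install APP_SLUG \
  --namespace APP_NAMESPACE \
  --license-file ./license.yaml \
  --airgap-bundle ./application.airgap \
  --kotsadm-registry registry.internal:5000/myapp \
  --registry-username RW_USERNAME \
  --registry-password RW_PASSWORD \
  --shared-password ADMIN_CONSOLE_PASSWORD
```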
