+
+[View a larger version of this image](/images/gitea-open-app.png)
+
+:::note
+KOTS uses the Kubernetes SIG Application custom resource as metadata and does not require or use an in-cluster controller to handle this custom resource. An application that follows best practices does not require cluster admin privileges or any cluster-wide components to be installed.
+:::
+
+## Add a Link
+
+To add a link to the Admin Console dashboard, include a [Kubernetes SIG Application](https://github.com/kubernetes-sigs/application#kubernetes-applications) custom resource in the release with a `spec.descriptor.links` field. The `spec.descriptor.links` field is an array of links that are displayed on the Admin Console dashboard after the application is deployed.
+
+Each link in the `spec.descriptor.links` array contains two fields:
+* `description`: The link text that will appear on the Admin Console dashboard.
+* `url`: The target URL.
+
+For example:
+
+```yaml
+# app.k8s.io/v1beta1 Application Custom resource
+
+apiVersion: app.k8s.io/v1beta1
+kind: Application
+metadata:
+  name: "wordpress"
+spec:
+  descriptor:
+    links:
+      - description: About Wordpress
+        url: "https://wordpress.org/"
+```
+
+When the application is deployed, the "About Wordpress" link is displayed on the Admin Console dashboard as shown below:
+
+
+
+[View a larger version of this image](/images/dashboard-link-about-wordpress.png)
+
+For an additional example of a Kubernetes SIG Application custom resource, see [application.yaml](https://github.com/kubernetes-sigs/application/blob/master/docs/examples/wordpress/application.yaml) in the kubernetes-sigs GitHub repository.
+
+### Create URLs with User-Supplied Values Using KOTS Template Functions {#url-template}
+
+You can use KOTS template functions to template URLs in the Kubernetes SIG Application custom resource. This can be useful when all or some of the URL is a user-supplied value. For example, an application might allow users to provide their own ingress controller or load balancer. In this case, the URL can be templated to render the hostname that the user provides on the Admin Console Config screen.
+
+The following examples show how to use the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function in the Kubernetes SIG Application custom resource `spec.descriptor.links.url` field to render one or more user-supplied values:
+
+* In the example below, the URL hostname is a user-supplied value for an ingress controller that the user configures during installation.
+
+ ```yaml
+ apiVersion: app.k8s.io/v1beta1
+ kind: Application
+ metadata:
+ name: "my-app"
+ spec:
+ descriptor:
+ links:
+ - description: Open App
+ url: 'http://{{repl ConfigOption "ingress_host" }}'
+ ```
+* In the example below, both the URL hostname and a node port are user-supplied values. It might be necessary to include a user-provided node port if you are exposing NodePort services for installations on VMs or bare metal servers with [Replicated Embedded Cluster](/vendor/embedded-overview) or [Replicated kURL](/vendor/kurl-about).
+
+ ```yaml
+ apiVersion: app.k8s.io/v1beta1
+ kind: Application
+ metadata:
+ name: "my-app"
+ spec:
+ descriptor:
+ links:
+ - description: Open App
+ url: 'http://{{repl ConfigOption "hostname" }}:{{repl ConfigOption "node_port"}}'
+ ```
+
+For more information about working with KOTS template functions, see [About Template Functions](/reference/template-functions-about).
\ No newline at end of file
diff --git a/docs/reference/admin-console-customize-app-icon.md b/docs/reference/admin-console-customize-app-icon.md
new file mode 100644
index 0000000000..7a8309f770
--- /dev/null
+++ b/docs/reference/admin-console-customize-app-icon.md
@@ -0,0 +1,58 @@
+# Customizing the Application Icon
+
+You can add a custom application icon that displays in the Replicated Admin Console and the download portal. Adding a custom icon helps ensure that your brand is reflected for your customers.
+
+:::note
+You can also use a custom domain for the download portal. For more information, see [About Custom Domains](custom-domains).
+:::
+
+## Add a Custom Icon
+
+For information about how to choose an image file for your custom application icon that displays well in the Admin Console, see [Icon Image File Recommendations](#icon-image-file-recommendations) below.
+
+To add a custom application icon:
+
+1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Click **Create release** to create a new release, or click **Edit YAML** to edit an existing release.
+1. Create or open the Application custom resource manifest file. An Application custom resource manifest file has `apiVersion: kots.io/v1beta1` and `kind: Application`.
+
+1. In the preview section of the Help pane:
+
+ 1. If your Application manifest file is already populated with an `icon` key, the icon displays in the preview. Click **Preview a different icon** to access the preview options.
+
+ 1. Drag and drop an icon image file to the drop zone. Alternatively, paste a link or Base64 encoded data URL in the text box. Click **Preview**.
+
+ 
+
+    1. (Air gap only) If you paste a link to the image in the text box, click **Preview** and **Base64 encode icon** to convert the image to a Base64 encoded data URL. The encoded URL is displayed so that you can copy and paste it into the Application manifest. Base64 encoding is required for images used with air gap installations.
+
+ :::note
+ If you pasted a Base64 encoded data URL into the text box, the **Base64 encode icon** button does not display because the image is already encoded. If you drag and drop an icon, the icon is automatically encoded for you.
+ :::
+
+ 
+
+ 1. Click **Preview a different icon** to preview a different icon if needed.
+
+1. In the Application manifest, under `spec`, add an `icon` key whose value is either a link to the desired image or a Base64 encoded data URL.
+
+ **Example**:
+
+ ```yaml
+ apiVersion: kots.io/v1beta1
+ kind: Application
+ metadata:
+ name: my-application
+ spec:
+ title: My Application
+ icon: https://kots.io/images/kotsadm-logo-large@2x.png
+ ```
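+
+    If you use a Base64 encoded data URL instead of a link, the value looks like the following sketch (the encoded string is truncated for illustration):
+
+    ```yaml
+    apiVersion: kots.io/v1beta1
+    kind: Application
+    metadata:
+      name: my-application
+    spec:
+      title: My Application
+      # Base64 encoded data URL (truncated); required for air gap installations
+      icon: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA...
+    ```
+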
+1. Click **Save Release**.
+
+
+## Icon Image File Recommendations
+
+For your custom application icon to look best in the Admin Console, consider the following recommendations:
+
+* Use a PNG or JPG file.
+* Use an image that is at least 250 by 250 pixels.
+* Export the image file at 2x.
diff --git a/docs/reference/admin-console-customize-config-screen.md b/docs/reference/admin-console-customize-config-screen.md
new file mode 100644
index 0000000000..dafe4a54fb
--- /dev/null
+++ b/docs/reference/admin-console-customize-config-screen.md
@@ -0,0 +1,135 @@
+# Creating and Editing Configuration Fields
+
+This topic describes how to use the KOTS Config custom resource manifest file to add and edit fields in the KOTS Admin Console configuration screen.
+
+## About the Config Custom Resource
+
+Applications distributed with Replicated KOTS can include a configuration screen in the Admin Console to collect required or optional values from your users that are used to run your application. For more information about the configuration screen, see [About the Configuration Screen](config-screen-about).
+
+To include a configuration screen in the Admin Console for your application, you add a Config custom resource manifest file to a release for the application.
+
+You define the fields that appear on the configuration screen as an array of `groups` and `items` in the Config custom resource:
+ * `groups`: A set of `items`. Each group must have a `name`, `title`, `description`, and `items`. For example, you can create a group of several user input fields that are all related to configuring an SMTP mail server.
+ * `items`: An array of user input fields. Each item under `items` must have a `name`, `title`, and `type`, and can also include several optional properties. For example, in a group for configuring an SMTP mail server, you can have user input fields under `items` for the SMTP hostname, port, username, and password.
+
+ There are several types of `items` supported in the Config manifest that allow you to collect different types of user inputs. For example, you can use the `password` input type to create a text field on the configuration screen that hides user input.
+
+For more information about the syntax of the Config custom resource manifest, see [Config](/reference/custom-resource-config).
+
+## About Regular Expression Validation
+
+You can use [RE2 regular expressions](https://github.com/google/re2/wiki/Syntax) (regex) to validate user input for config items, ensuring conformity to certain standards, such as valid email addresses, password complexity rules, IP addresses, and URLs. This prevents users from deploying an application with a verifiably invalid configuration.
+
+You add the `validation`, `regex`, `pattern`, and `message` fields to items in the Config custom resource. Validation is supported for the `text`, `textarea`, `password`, and `file` config item types. For more information about regex validation fields, see [Item Validation](/reference/custom-resource-config#item-validation) in _Config_.
+
+The following example shows a common password complexity rule:
+
+```yaml
+- name: smtp-settings
+ title: SMTP Settings
+ items:
+ - name: smtp_password
+ title: SMTP Password
+ type: password
+ help_text: Set SMTP password
+ validation:
+ regex:
+ pattern: ^(?:[\w@#$%^&+=!*()_\-{}[\]:;"'<>,.?\/|]){8,16}$
+      message: The password must be between 8 and 16 characters long and can contain a combination of uppercase letters, lowercase letters, digits, and special characters.
+```
+
+## Add Fields to the Configuration Screen
+
+To add fields to the Admin Console configuration screen:
+
+1. In the [Vendor Portal](https://vendor.replicated.com/apps), click **Releases**. Then, either click **Create release** to create a new release, or click **Edit YAML** to edit an existing release.
+1. Create or open the Config custom resource manifest file in the desired release. A Config custom resource manifest file has `kind: Config`.
+1. In the Config custom resource manifest file, define custom user-input fields in an array of `groups` and `items`.
+
+ **Example**:
+
+ ```yaml
+ apiVersion: kots.io/v1beta1
+ kind: Config
+ metadata:
+ name: my-application
+ spec:
+ groups:
+ - name: smtp_settings
+ title: SMTP Settings
+ description: Configure SMTP Settings
+ items:
+ - name: enable_smtp
+ title: Enable SMTP
+ help_text: Enable SMTP
+ type: bool
+ default: "0"
+ - name: smtp_host
+ title: SMTP Hostname
+ help_text: Set SMTP Hostname
+ type: text
+ - name: smtp_port
+ title: SMTP Port
+ help_text: Set SMTP Port
+ type: text
+ - name: smtp_user
+ title: SMTP User
+ help_text: Set SMTP User
+ type: text
+ - name: smtp_password
+ title: SMTP Password
+ type: password
+ default: 'password'
+ ```
+
+ The example above includes a single group with the name `smtp_settings`.
+
+ The `items` array for the `smtp_settings` group includes the following user-input fields: `enable_smtp`, `smtp_host`, `smtp_port`, `smtp_user`, and `smtp_password`. Additional item properties are available, such as `affix` to make items appear horizontally on the same line. For more information about item properties, see [Item Properties](/reference/custom-resource-config#item-properties) in Config.
+
+ The following screenshot shows how the SMTP Settings group from the example YAML above displays in the Admin Console configuration screen during application installation:
+
+ 
+
+1. (Optional) Add default values for the fields. You can add default values using one of the following properties:
+ * **With the `default` property**: When you include the `default` key, KOTS uses this value when rendering the manifest files for your application. The value then displays as a placeholder on the configuration screen in the Admin Console for your users. KOTS only uses the default value if the user does not provide a different value.
+
+ :::note
+ If you change the `default` value in a later release of your application, installed instances of your application receive the updated value only if your users did not change the default from what it was when they initially installed the application.
+
+ If a user did change a field from its default, the Admin Console does not overwrite the value they provided.
+ :::
+
+    * **With the `value` property**: When you include the `value` key, KOTS does not overwrite this value during an application update. The value that you provide for the `value` key is visually indistinguishable from other values that your user provides on the Admin Console configuration screen. KOTS treats user-supplied values and the value that you provide for the `value` key as the same. See the sketch below.
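+
+    The following sketch shows both properties side by side (the field names and values are illustrative; `RandomString` is a KOTS template function):
+
+    ```yaml
+    - name: smtp_port
+      title: SMTP Port
+      type: text
+      # Shown as placeholder text; re-rendered on config changes
+      # unless the user supplies their own value
+      default: "587"
+    - name: jwt_secret
+      title: JWT Secret
+      type: password
+      # Treated like a user-supplied value; not overwritten on update
+      value: '{{repl RandomString 32}}'
+    ```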
+
+2. (Optional) Add regular expressions to validate user input for `text`, `textarea`, `password` and `file` config item types. For more information, see [About Regular Expression Validation](#about-regular-expression-validation).
+
+ **Example**:
+
+ ```yaml
+ - name: smtp_host
+ title: SMTP Hostname
+ help_text: Set SMTP Hostname
+ type: text
+ validation:
+ regex:
+ pattern: ^[a-zA-Z]([a-zA-Z0-9\-]+[\.]?)*[a-zA-Z0-9]$
+ message: Valid hostname starts with a letter (uppercase/lowercase), followed by zero or more groups of letters (uppercase/lowercase), digits, or hyphens, optionally followed by a period. Ends with a letter or digit.
+ ```
+3. (Optional) Mark fields as required by including `required: true`. When there are required fields, the user is prevented from proceeding with the installation until they provide a valid value for required fields.
+
+ **Example**:
+
+ ```yaml
+ - name: smtp_password
+ title: SMTP Password
+ type: password
+ required: true
+ ```
+
+4. Save and promote the release to a development environment to test your changes.
+
+## Next Steps
+
+After you add user input fields to the configuration screen, you use template functions to map the user-supplied values to manifest files in your release. If you use a Helm chart for your application, you map the values to the Helm chart `values.yaml` file using the HelmChart custom resource.
+
+For more information, see [Mapping User-Supplied Values](config-screen-map-inputs).
\ No newline at end of file
diff --git a/docs/reference/admin-console-display-app-status.md b/docs/reference/admin-console-display-app-status.md
new file mode 100644
index 0000000000..2733c248b2
--- /dev/null
+++ b/docs/reference/admin-console-display-app-status.md
@@ -0,0 +1,88 @@
+import StatusesTable from "../partials/status-informers/_statusesTable.mdx"
+import AggregateStatus from "../partials/status-informers/_aggregateStatus.mdx"
+import AggregateStatusIntro from "../partials/status-informers/_aggregate-status-intro.mdx"
+import SupportedResources from "../partials/instance-insights/_supported-resources-status.mdx"
+
+# Adding Resource Status Informers
+
+This topic describes how to add status informers for your application. Status informers apply only to applications installed with Replicated KOTS. For information about how to collect application status data for applications installed with Helm, see [Enabling and Understanding Application Status](insights-app-status).
+
+## About Status Informers
+
+_Status informers_ are a feature of KOTS that report on the status of supported Kubernetes resources deployed as part of your application. You enable status informers by listing the target resources under the `statusInformers` property in the Replicated Application custom resource. KOTS watches all of the resources that you add to the `statusInformers` property for changes in state.
+
+Possible resource statuses are Ready, Updating, Degraded, Unavailable, and Missing. For more information, see [Understanding Application Status](#understanding-application-status).
+
+When you add one or more status informers to your application, KOTS automatically does the following:
+
+* Displays application status for your users on the dashboard of the Admin Console. This can help users diagnose and troubleshoot problems with their instance. The following shows an example of how an Unavailable status displays on the Admin Console dashboard:
+
+
+
+* Sends application status data to the Vendor Portal. This is useful for viewing insights on instances of your application running in customer environments, such as the current status and the average uptime. For more information, see [Instance Details](instance-insights-details).
+
+ The following shows an example of the Vendor Portal **Instance details** page with data about the status of an instance over time:
+
+
+
+ [View a larger version of this image](/images/instance-details.png)
+
+## Add Status Informers
+
+To create status informers for your application, add one or more supported resource types to the `statusInformers` property in the Application custom resource. See [`statusInformers`](/reference/custom-resource-application#statusinformers) in _Application_.
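+
+For example, the following sketch watches two Deployments (the resource names are placeholders):
+
+```yaml
+apiVersion: kots.io/v1beta1
+kind: Application
+metadata:
+  name: my-application
+spec:
+  statusInformers:
+    - deployment/my-web-svc
+    - deployment/my-worker
+```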
+
+
+
+
+## Access Port-Forwarded Services
+
+This section describes how to access port-forwarded services.
+
+### Command Line
+
+Run [`kubectl kots admin-console`](/reference/kots-cli-admin-console-index) to open the KOTS port forward tunnel.
+
+The `kots admin-console` command runs the equivalent of `kubectl port-forward svc/myapplication-service <localPort>:<servicePort>`, using the service name and the local and service ports that are defined in the `ports` key of the KOTS Application custom resource.
+
+[View a larger version of this image](/images/gitea-open-app.png)
+
+## Examples
+
+This section provides examples of how to configure the `ports` key to port-forward a service in existing cluster installations and add links to services on the Admin Console dashboard.
+
+### Example: Bitnami Gitea Helm Chart with LoadBalancer Service
+
+This example uses a KOTS Application custom resource and a Kubernetes SIG Application custom resource to configure port forwarding for the Bitnami Gitea Helm chart in existing cluster installations, and add a link to the port-forwarded service on the Admin Console dashboard. To view the Gitea Helm chart source, see [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea) in GitHub.
+
+To test this example:
+
+1. Pull version 1.0.6 of the Gitea Helm chart from Bitnami:
+
+    ```bash
+ helm pull oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
+ ```
+
+1. Add the `gitea-1.0.6.tgz` chart archive to a new, empty release in the Vendor Portal along with the `kots-app.yaml`, `k8s-app.yaml`, and `gitea.yaml` files provided below. Promote to the channel that you use for internal testing. For more information, see [Managing Releases with the Vendor Portal](releases-creating-releases).
+
+    Based on the `templates/svc.yaml` and `values.yaml` files in the Gitea Helm chart, the following KOTS Application custom resource adds port 3000 to the port forward tunnel and maps local port 8888. Port 3000 is the container port of the Pod where the `gitea` service runs.
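+
+    The following is a sketch of this `kots-app.yaml`, reconstructed from the description above (the `gitea` service name is an assumption based on the chart defaults):
+
+    ```yaml
+    # kots-app.yaml (sketch; values follow the description above)
+    apiVersion: kots.io/v1beta1
+    kind: Application
+    metadata:
+      name: gitea
+    spec:
+      title: Gitea
+      ports:
+        - serviceName: "gitea"
+          servicePort: 3000
+          localPort: 8888
+          applicationUrl: "http://gitea"
+    ```
+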
+The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
+
+### Example: NGINX Deployment with ClusterIP and NodePort Services
+
+This example uses a basic Deployment specification for the NGINX application, along with ClusterIP and NodePort specifications for a service named `nginx`. Each service specification uses the `kots.io/when` annotation with the Replicated IsKurl template function to conditionally include the service based on the installation type (existing cluster or kURL cluster), and both the ClusterIP and NodePort `nginx` services are exposed on port 80. For more information, see [Conditionally Including or Excluding Resources](/vendor/packaging-include-resources) and [IsKurl](/reference/template-functions-static-context#iskurl).
+
+The KOTS Application custom resource below adds port 80 to the KOTS port forward tunnel and maps port 8888 on the local machine. The specification also includes `applicationUrl: "http://nginx"` so that a link to the service can be added to the Admin Console dashboard.
+
+The Kubernetes SIG Application custom resource lists the same URL as the `ports.applicationUrl` field in the KOTS Application custom resource (`"http://nginx"`). This adds a link to the port-forwarded service on the Admin Console dashboard, and triggers KOTS to rewrite the URL to use the hostname in the browser and append the specified `localPort`. The label used for the link in the Admin Console is "Open App".
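+
+The following sketch shows the two custom resources described above, using the values from the description (the metadata names are assumptions):
+
+```yaml
+# kots-app.yaml (sketch)
+apiVersion: kots.io/v1beta1
+kind: Application
+metadata:
+  name: nginx
+spec:
+  title: NGINX
+  ports:
+    - serviceName: "nginx"
+      servicePort: 80
+      localPort: 8888
+      applicationUrl: "http://nginx"
+---
+# k8s-app.yaml (sketch)
+apiVersion: app.k8s.io/v1beta1
+kind: Application
+metadata:
+  name: "nginx"
+spec:
+  descriptor:
+    links:
+      - description: Open App
+        url: "http://nginx"
+```
+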
+| GitHub Action | When to Use | Related Replicated CLI Commands |
+|---|---|---|
+| archive-channel | In release workflows, a temporary channel is created to promote a release for testing. This action archives the temporary channel after tests complete. See Archive the temporary channel and customer in Recommended CI/CD Workflows. | `channel delete` |
+| archive-customer | In release workflows, a temporary customer is created so that a release can be installed for testing. This action archives the temporary customer after tests complete. See Archive the temporary channel and customer in Recommended CI/CD Workflows. | N/A |
+| create-cluster | In release workflows, use this action to create one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. | `cluster create` |
+| create-release | In release workflows, use this action to create a release to be installed and tested, and optionally to be promoted to a shared channel after tests complete. See Create a release and promote to a temporary channel in Recommended CI/CD Workflows. | `release create` |
+| get-customer-instances | In release workflows, use this action to create a matrix of clusters for running tests based on the Kubernetes distributions and versions of active instances of your application running in customer environments. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. | N/A |
+| helm-install | In development or release workflows, use this action to install a release using the Helm CLI in one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. | N/A |
+| kots-install | In development or release workflows, use this action to install a release with Replicated KOTS in one or more clusters for testing. See Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. | N/A |
+| prepare-cluster | In development workflows, use this action to create a cluster, create a temporary customer of type `test`, and install a release in the cluster. See Prepare clusters, deploy, and test in Recommended CI/CD Workflows. | `cluster prepare` |
+| promote-release | In release workflows, use this action to promote a release to an internal or customer-facing channel (such as Unstable, Beta, or Stable) after tests pass. See Promote to a shared channel in Recommended CI/CD Workflows. | `release promote` |
+| remove-cluster | In development or release workflows, use this action to remove a cluster after running tests if no `ttl` was set for the cluster. See Prepare clusters, deploy, and test and Create cluster matrix, deploy, and test in Recommended CI/CD Workflows. | `cluster rm` |
+| report-compatibility-result | In development or release workflows, use this action to report the success or failure of tests that ran in clusters provisioned by the Compatibility Matrix. | `release compatibility` |
+| upgrade-cluster | In release workflows, use this action to test your application's compatibility with Kubernetes API resource version migrations after upgrading. | `cluster upgrade` |
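+
+The following is a hypothetical workflow sketch showing how one of these actions might be wired into a release job. The action path follows the replicatedhq/replicated-actions repository, and the input names are assumptions rather than documented inputs:
+
+```yaml
+# Hypothetical sketch only; input names below are assumptions.
+name: release-test
+on:
+  push:
+    tags: ["v*"]
+jobs:
+  create-test-release:
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v4
+      - name: Create a release on a temporary channel
+        uses: replicatedhq/replicated-actions/create-release@v1
+        with:
+          app-slug: my-app                                # assumption
+          api-token: ${{ secrets.REPLICATED_API_TOKEN }}  # assumption
+          yaml-dir: ./manifests                           # assumption
+          promote-channel: ci-${{ github.ref_name }}      # assumption
+```
+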
-| Required Field | Allowed Values | Allowed Special Characters |
-|---|---|---|
-| Minute | 0 through 59 | , - * |
-| Hour | 0 through 23 | , - * |
-| Day-of-month | 1 through 31 | , - * ? |
-| Month | 1 through 12 or JAN through DEC | , - * |
-| Day-of-week | 1 through 7 or SUN through SAT | , - * ? |
-
-| Special Character | Description |
-|---|---|
-| Comma (,) | Specifies a list or multiple values, which can be consecutive or not. For example, 1,2,4 in the Day-of-week field signifies every Monday, Tuesday, and Thursday. |
-| Dash (-) | Specifies a contiguous range. For example, 4-6 in the Month field signifies April through June. |
-| Asterisk (*) | Specifies that all of the values for the field are used. For example, using * in the Month field means that all of the months are included in the schedule. |
-| Question mark (?) | Specifies that one or another value can be used. For example, enter 5 for Day-of-month and ? for Day-of-week to check for updates on the 5th day of the month, regardless of which day of the week it is. |
-
-| Schedule Value | Description | Equivalent Cron Expression |
-|---|---|---|
-| @yearly (or @annually) | Runs once a year, at midnight on January 1. | 0 0 1 1 * |
-| @monthly | Runs once a month, at midnight on the first of the month. | 0 0 1 * * |
-| @weekly | Runs once a week, at midnight on Sunday. | 0 0 * * 0 |
-| @daily (or @midnight) | Runs once a day, at midnight. | 0 0 * * * |
-| @hourly | Runs once an hour, at the beginning of the hour. | 0 * * * * |
-| @never | Disables the schedule completely. Only used by KOTS. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | 0 * * * * |
-| @default | Selects the default schedule option (every 4 hours). Begins when the Admin Console starts up. This value can be useful when you are calling the API directly or are editing the KOTS configuration manually. | 0 * * * * |
+
+ [View a larger version of this image](/images/custom-domains-download-configure.png)
+
+1. For **Domain**, enter the custom domain. Click **Save & continue**.
+
+1. For **Create CNAME**, copy the text string and use it to create a CNAME record in your DNS account. Click **Continue**.
+
+1. For **Verify ownership**, if a text string is displayed, copy it and use it to create a TXT record in your DNS account. If a TXT record is not displayed, ownership is validated automatically using an HTTP token. Click **Validate & continue**.
+
+ Your changes can take up to 24 hours to propagate.
+
+1. For **TLS cert creation verification**, if a text string is displayed, copy it and use it to create a TXT record in your DNS account. If a TXT record is not displayed, verification is completed automatically using an HTTP token. Click **Validate & continue**.
+
+ Your changes can take up to 24 hours to propagate.
+
+ :::important
+ If you set up a [CAA record](https://letsencrypt.org/docs/caa/) for this hostname, it might prevent TLS certificate renewal in the future. This can result in downtime for your customers.
+ :::
+
+1. For **Use Domain**, to set the new domain as the default, click **Yes, set as default**. Otherwise, click **Not now**.
+
+ :::note
+ Replicated recommends that you do _not_ set a domain as the default until you are ready for it to be used by customers.
+ :::
+
+The Vendor Portal marks the domain as **Configured** after the verification checks for ownership and TLS certificate creation are complete.
+
+## Use Custom Domains
+
+After you configure one or more custom domains in the Vendor Portal, you assign a custom domain by setting it as the default for all channels and customers or by assigning it to an individual release channel.
+
+### Set a Default Domain
+
+Setting a default domain is useful for ensuring that the same domain is used across channels for all your customers.
+
+When you set a custom domain as the default, it is used by default for all new releases promoted to any channel, as long as the channel does not have a different domain assigned in its channel settings.
+
+Only releases that are promoted to a channel _after_ you set a default domain use the new default domain. Any existing releases that were promoted before you set the default continue to use the same domain that they used previously.
+
+To set a custom domain as the default:
+
+1. In the Vendor Portal, go to **Custom Domains**.
+
+1. Next to the target domain, click **Set as default**.
+
+1. In the confirmation dialog that opens, click **Yes, set as default**.
+
+### Assign a Domain to a Channel {#channel-domain}
+
+You can assign a domain to an individual channel by editing the channel settings. When you specify a domain in the channel settings, new releases promoted to the channel use the selected domain even if there is a different domain set as the default on the **Custom Domains** page.
+
+Assigning a domain to a release channel is useful when you need to override either the default Replicated domain or a default custom domain for a specific channel. For example:
+* You need to use a different domain for releases promoted to your Beta and Stable channels.
+* You need to test a domain in a development environment before you set the domain as the default for all channels.
+
+To assign a custom domain to a channel:
+
+1. In the Vendor Portal, go to **Channels** and click the settings icon for the target channel.
+
+1. Under **Custom domains**, in the drop-down for the target Replicated endpoint, select the domain to use for the channel. For more information about channel settings, see [Settings](releases-about#settings) in _About Channels and Releases_.
+
+
+
+ [View a larger version of this image](/images/channel-settings.png)
+
+## Reuse a Custom Domain for Another Application
+
+If you have configured a custom domain for one application, you can reuse the custom domain for another application in the same team without going through the ownership and TLS certificate verification process again.
+
+To reuse a custom domain for another application:
+
+1. In the Vendor Portal, select the application from the dropdown list.
+
+1. Click **Custom Domains**.
+
+1. In the section for the target endpoint, click **Add your first custom domain** for your first domain, or click **Add new domain** for additional domains.
+
+ The **Configure a custom domain** wizard opens.
+
+1. In the text box, enter the custom domain name that you want to reuse. Click **Save & continue**.
+
+ The last page of the wizard opens because the custom domain was verified previously.
+
+1. Do one of the following:
+
+ - Click **Set as default**. In the confirmation dialog that opens, click **Yes, set as default**.
+
+   - Click **Not now**. You can come back later to set the domain as the default. The Vendor Portal shows that the domain has a Configured status because it was configured for a previous application, though it is not yet assigned as the default for this application.
+
+
+## Remove a Custom Domain
+
+You can remove a custom domain at any time, but you should plan the transition so that you do not break any existing installations or documentation.
+
+Removing a custom domain for the Replicated registry, proxy registry, or Replicated app service will break existing installations that use the custom domain. Existing installations need to be upgraded to a version that does not use the custom domain before it can be removed safely.
+
+If you remove a custom domain for the download portal, it is no longer accessible using the custom URL. You will need to point customers to an updated URL.
+
+To remove a custom domain:
+
+1. Log in to the [Vendor Portal](https://vendor.replicated.com) and click **Custom Domains**.
+
+1. Verify that the domain is neither set as the default nor in use on any channels. You can edit the domains in use on a channel in the channel settings. For more information, see [Settings](releases-about#settings) in _About Channels and Releases_.
+
+ :::important
+ When you remove a registry or Replicated app service custom domain, any installations that reference that custom domain will break. Ensure that the custom domain is no longer in use before you remove it from the Vendor Portal.
+ :::
+
+1. Click **Remove** next to the unused domain in the list, and then click **Yes, remove domain**.
diff --git a/docs/reference/custom-domains.md b/docs/reference/custom-domains.md
new file mode 100644
index 0000000000..f6a93528fe
--- /dev/null
+++ b/docs/reference/custom-domains.md
@@ -0,0 +1,40 @@
+# About Custom Domains
+
+This topic provides an overview and the limitations of using custom domains to alias the Replicated private registry, Replicated proxy registry, Replicated app service, and the Download Portal.
+
+For information about configuring and managing custom domains, see [Using Custom Domains](custom-domains-using).
+
+## Overview
+
+You can use custom domains to alias Replicated endpoints by creating Canonical Name (CNAME) records for your domains.
+
+Replicated domains are external to your domain and can require additional security reviews by your customer. Using custom domains as aliases can bring the domains inside an existing security review and reduce your exposure.
+
+TXT records must be created to verify:
+
+- Domain ownership: Domain ownership is verified when you initially add a record.
+- TLS certificate creation: Each new domain must have a new TLS certificate to be verified.
+
+The TXT records can be removed after the verification is complete.
+
+You can configure custom domains for the following services, so that customer-facing URLs reflect your company's brand:
+
+- **Replicated registry:** Images and Helm charts can be pulled from the Replicated registry. By default, this registry uses the domain `registry.replicated.com`. We suggest using a CNAME such as `registry.{your app name}.com`.
+
+- **Proxy registry:** Images can be proxied from external private registries using the Replicated proxy registry. By default, the proxy registry uses the domain `proxy.replicated.com`. We suggest using a CNAME such as `proxy.{your app name}.com`.
+
+- **Replicated app service:** Upstream application YAML and metadata, including a license ID, are pulled from replicated.app. By default, this service uses the domain `replicated.app`. We suggest using a CNAME such as `updates.{your app name}.com`.
+
+- **Download Portal:** The Download Portal can be used to share customer license files, air gap bundles, and so on. By default, the Download Portal uses the domain `get.replicated.com`. We suggest using a CNAME such as `portal.{your app name}.com` or `enterprise.{your app name}.com`.
+
+## Limitations
+
+Using custom domains has the following limitations:
+
+- A single custom domain cannot be used for multiple endpoints. For example, a single domain can map to `registry.replicated.com` for any number of applications, but cannot map to both `registry.replicated.com` and `proxy.replicated.com`, even if the applications are different.
+
+- Custom domains cannot be used to alias api.replicated.com (legacy customer-facing APIs) or kURL.
+
+- Multiple custom domains can be configured, but only one custom domain can be the default for each Replicated endpoint. All configured custom domains work whether or not they are the default.
+
+- A particular custom domain can only be used by one team.
diff --git a/docs/reference/custom-metrics.md b/docs/reference/custom-metrics.md
new file mode 100644
index 0000000000..aa345c82b9
--- /dev/null
+++ b/docs/reference/custom-metrics.md
@@ -0,0 +1,201 @@
+# Configuring Custom Metrics (Beta)
+
+This topic describes how to configure an application to send custom metrics to the Replicated Vendor Portal.
+
+## Overview
+
+In addition to the built-in insights displayed in the Vendor Portal by default (such as uptime and time to install), you can also configure custom metrics to measure instances of your application running in customer environments. Custom metrics can be collected for application instances running in online or air gap environments.
+
+Custom metrics can be used to generate insights on customer usage and adoption of new features, which can help your team to make more informed prioritization decisions. For example:
+* Decreased or plateaued usage for a customer can indicate a potential churn risk
+* Increased usage for a customer can indicate the opportunity to invest in growth, co-marketing, and upsell efforts
+* Low feature usage and adoption overall can indicate the need to invest in usability, discoverability, documentation, education, or in-product onboarding
+* High usage volume for a customer can indicate that the customer might need help in scaling their instance infrastructure to keep up with projected usage
+
+## How the Vendor Portal Collects Custom Metrics
+
+The Vendor Portal collects custom metrics through the Replicated SDK that is installed in the cluster alongside the application.
+
+The SDK exposes an in-cluster API where you can configure your application to POST metric payloads. When an application instance sends data to the API, the SDK sends the data (including any custom and built-in metrics) to the Replicated app service. The app service is located at `replicated.app` or at your custom domain.
+
+If any values in the metric payload are different from the current values for the instance, then a new event is generated and displayed in the Vendor Portal. For more information about how the Vendor Portal generates events, see [How the Vendor Portal Generates Events and Insights](/vendor/instance-insights-event-data#about-events) in _About Instance and Event Data_.
+
+The following diagram demonstrates how a custom `activeUsers` metric is sent to the in-cluster API and ultimately displayed in the Vendor Portal, as described above:
+
+
+
+[View a larger version of this image](/images/custom-metrics-flow.png)
+
+## Requirements
+
+To support the collection of custom metrics in online and air gap environments, the Replicated SDK version 1.0.0-beta.12 or later must be running in the cluster alongside the application instance.
+
+The `PATCH` and `DELETE` methods described below are available in the Replicated SDK version 1.0.0-beta.23 or later.
+
+For more information about the Replicated SDK, see [About the Replicated SDK](/vendor/replicated-sdk-overview).
+
+If you have any customers running earlier versions of the SDK, Replicated recommends that you add logic to your application to gracefully handle a 404 from the in-cluster APIs.
+
+## Limitations
+
+Custom metrics have the following limitations:
+
+* The label that is used to display metrics in the Vendor Portal cannot be customized. Metrics are sent to the Vendor Portal with the same name that is sent in the `POST` or `PATCH` payload. The Vendor Portal then converts camel case to title case: for example, `activeUsers` is displayed as **Active Users**.
+
+* The in-cluster APIs accept only JSON scalar values for metrics. Any requests containing nested objects or arrays are rejected.
+
+* When using the `POST` method, any existing keys that are not included in the payload are deleted. To create new metrics or update existing ones without sending the entire dataset, use the `PATCH` method.
+
+## Configure Custom Metrics
+
+You can configure your application to `POST` or `PATCH` a set of metrics as key value pairs to the API that is running in the cluster alongside the application instance.
+
+To remove an existing custom metric, use the `DELETE` endpoint with the custom metric name.
+
+The Replicated SDK provides an in-cluster API custom metrics endpoint at `http://replicated:3000/api/v1/app/custom-metrics`.
+
+**Example:**
+
+```bash
+POST http://replicated:3000/api/v1/app/custom-metrics
+```
+
+```json
+{
+ "data": {
+ "num_projects": 5,
+ "weekly_active_users": 10
+ }
+}
+```
+
+```bash
+PATCH http://replicated:3000/api/v1/app/custom-metrics
+```
+
+```json
+{
+ "data": {
+ "num_projects": 54,
+ "num_error": 2
+ }
+}
+```
+
+```bash
+DELETE http://replicated:3000/api/v1/app/custom-metrics/num_projects
+```
+
+### POST vs PATCH
+
+The `POST` method will always replace the existing data with the most recent payload received. Any existing keys not included in the most recent payload will still be accessible in the instance events API, but they will no longer appear in the instance summary.
+
+The `PATCH` method will accept partial updates or add new custom metrics if a key:value pair that does not currently exist is passed.
+
+In most cases, simply using the `PATCH` method is recommended.
+
+For example, if a component of your application sends the following via the `POST` method:
+
+```json
+{
+ "numProjects": 5,
+  "activeUsers": 10
+}
+```
+
+Then, the component later sends the following also via the `POST` method:
+
+```json
+{
+ "activeUsers": 10,
+ "usingCustomReports": false
+}
+```
+
+The instance detail will show `Active Users: 10` and `Using Custom Reports: false`, which represents the most recent payload received. The previously-sent `numProjects` value is discarded from the instance summary and is available in the instance events payload. In order to preserve `numProjects` from the initial payload and upsert `usingCustomReports` and `activeUsers`, use the `PATCH` method instead of `POST` on subsequent calls to the endpoint.
+
+For example, if a component of your application initially sends the following via the `POST` method:
+
+```json
+{
+ "numProjects": 5,
+  "activeUsers": 10
+}
+```
+
+Then, the component later sends the following also via the `PATCH` method:
+```json
+{
+ "usingCustomReports": false
+}
+```
+
+The instance detail will show `Num Projects: 5`, `Active Users: 10`, `Using Custom Reports: false`, which represents the merged and upserted payload.
+
+### NodeJS Example
+
+The following example shows a NodeJS application that sends metrics once a day to the in-cluster API exposed by the SDK:
+
+```javascript
+async function sendMetrics(db) {
+
+ const projectsQuery = "SELECT COUNT(*) as num_projects from projects";
+ const numProjects = (await db.getConnection().queryOne(projectsQuery)).num_projects;
+
+ const usersQuery =
+ "SELECT COUNT(*) as active_users from users where DATEDIFF('day', last_active, CURRENT_TIMESTAMP) < 7";
+ const activeUsers = (await db.getConnection().queryOne(usersQuery)).active_users;
+
+ const metrics = { data: { numProjects, activeUsers }};
+
+  const res = await fetch('http://replicated:3000/api/v1/app/custom-metrics', {
+ method: 'POST',
+ headers: {
+ "Content-Type": "application/json",
+ },
+ body: JSON.stringify(metrics),
+ });
+ if (res.status !== 200) {
+ throw new Error(`Failed to send metrics: ${res.statusText}`);
+ }
+}
+
+async function startMetricsLoop(db) {
+
+ const ONE_DAY_IN_MS = 1000 * 60 * 60 * 24
+
+ // send metrics once on startup
+ await sendMetrics(db)
+ .catch((e) => { console.log("error sending metrics: ", e) });
+
+  // schedule daily metrics payload
+
+  setInterval( () => {
+    sendMetrics(db)
+ .catch((e) => { console.log("error sending metrics: ", e) });
+ }, ONE_DAY_IN_MS);
+}
+
+startMetricsLoop(getDatabase());
+```
+
+## View Custom Metrics
+
+You can view the custom metrics that you configure for each active instance of your application on the **Instance Details** page in the Vendor Portal.
+
+The following shows an example of an instance with custom metrics:
+
+
+
+[View a larger version of this image](/images/instance-custom-metrics.png)
+
+As shown in the image above, the **Custom Metrics** section of the **Instance Details** page includes the following information:
+* The timestamp when the custom metric data was last updated.
+* Each custom metric that you configured, along with the most recent value for the metric.
+* A time-series graph depicting the historical data trends for the selected metric.
+
+Custom metrics are also included in the **Instance activity** stream of the **Instance Details** page. For more information, see [Instance Activity](/vendor/instance-insights-details#instance-activity) in _Instance Details_.
+
+## Export Custom Metrics
+
+You can use the Vendor API v3 `/app/{app_id}/events` endpoint to programmatically access historical time series data containing instance level events, including any custom metrics that you have defined. For more information about the endpoint, see [Export Customer and Instance Data](/vendor/instance-data-export).
diff --git a/docs/reference/custom-resource-about.md b/docs/reference/custom-resource-about.md
deleted file mode 100644
index 09778ba712..0000000000
--- a/docs/reference/custom-resource-about.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# About Custom Resources
-
-You can include custom resources in releases to control the experience for applications installed with Replicated KOTS.
-
-Custom resources are consumed by KOTS, the Admin Console, or by other kubectl plugins. Custom resources are packaged as part of the application, but are _not_ deployed to the cluster.
-
-## KOTS Custom Resources
-
-The following are custom resources in the `kots.io` API group:
-
-| API Group/Version | Kind | Description |
-|---------------|------|-------------|
-| kots.io/v1beta1 | [Application](custom-resource-application) | Adds additional metadata (branding, release notes and more) to an application |
-| kots.io/v1beta1 | [Config](custom-resource-config)| Defines a user-facing configuration screen in the Admin Console |
-| kots.io/v1beta2 | [HelmChart](custom-resource-helmchart-v2) | Identifies an instantiation of a Helm Chart |
-| kots.io/v1beta1 | [LintConfig](custom-resource-lintconfig) | Customizes the default rule levels for the KOTS release linter |
-
-## Other Custom Resources
-
-The following are custom resources in API groups other than `kots.io` that can be included in a KOTS release to configure additional functionality:
-
-| API Group/Version | Kind | Description |
-|---------------|------|-------------|
-| app.k8s.io/v1beta1 | [SIG Application](https://github.com/kubernetes-sigs/application#kubernetes-applications) | Defines metadata about the application |
-| cluster.kurl.sh/v1beta1 | [Installer](https://kurl.sh/docs/create-installer/) | Defines a Replicated kURL distribution |
-| embeddedcluster.replicated.com/v1beta1 | [Config](/reference/embedded-config) | Defines a Replicated Embedded Cluster distribution |
-| troubleshoot.replicated.com/v1beta2 | [Preflight](custom-resource-preflight) | Defines the data to collect and analyze for custom preflight checks |
-| troubleshoot.replicated.com/v1beta2 | [Redactor](https://troubleshoot.sh/reference/redactors/overview/) | Defines custom redactors that apply to support bundles and preflight checks |
-| troubleshoot.sh/v1beta2 | [Support Bundle](custom-resource-preflight) | Defines the data to collect and analyze for a support bundle |
-| velero.io/v1 | [Backup](https://velero.io/docs/v1.10/api-types/backup/) | A Velero backup request, triggered when the user initiates a backup with Replicated snapshots |
-
diff --git a/docs/reference/custom-resource-application.mdx b/docs/reference/custom-resource-application.mdx
deleted file mode 100644
index 82596df65f..0000000000
--- a/docs/reference/custom-resource-application.mdx
+++ /dev/null
@@ -1,419 +0,0 @@
-import Title from "../partials/custom-resource-application/_title.mdx"
-import Icon from "../partials/custom-resource-application/_icon.mdx"
-import ReleaseNotes from "../partials/custom-resource-application/_releaseNotes.mdx"
-import AllowRollback from "../partials/custom-resource-application/_allowRollback.mdx"
-import AdditionalNamespaces from "../partials/custom-resource-application/_additionalNamespaces.mdx"
-import AdditionalImages from "../partials/custom-resource-application/_additionalImages.mdx"
-import RequireMinimalRBACPrivileges from "../partials/custom-resource-application/_requireMinimalRBACPrivileges.mdx"
-import SupportMinimalRBACPrivileges from "../partials/custom-resource-application/_supportMinimalRBACPrivileges.mdx"
-import Ports from "../partials/custom-resource-application/_ports.mdx"
-import StatusInformers from "../partials/custom-resource-application/_statusInformers.mdx"
-import Graphs from "../partials/custom-resource-application/_graphs.mdx"
-import GraphsTemplates from "../partials/custom-resource-application/_graphs-templates.mdx"
-import TargetKotsVersion from "../partials/custom-resource-application/_targetKotsVersion.mdx"
-import MinKotsVersion from "../partials/custom-resource-application/_minKotsVersion.mdx"
-import ProxyRegistryDomain from "../partials/custom-resource-application/_proxyRegistryDomain.mdx"
-import ReplicatedRegistryDomain from "../partials/custom-resource-application/_replicatedRegistryDomain.mdx"
-import ServicePortNote from "../partials/custom-resource-application/_servicePort-note.mdx"
-import PortsServiceName from "../partials/custom-resource-application/_ports-serviceName.mdx"
-import PortsLocalPort from "../partials/custom-resource-application/_ports-localPort.mdx"
-import PortsServicePort from "../partials/custom-resource-application/_ports-servicePort.mdx"
-import PortsApplicationURL from "../partials/custom-resource-application/_ports-applicationURL.mdx"
-import KurlNote from "../partials/custom-resource-application/_ports-kurl-note.mdx"
-
-# Application
-
-The Application custom resource enables features such as branding, release notes, port forwarding, dashboard buttons, app status indicators, and custom graphs.
-
-There is some overlap between the Application custom resource manifest file and the [Kubernetes SIG Application custom resource](https://github.com/kubernetes-sigs/application/blob/master/docs/api.md). For example, enabling features such as [adding a button to the dashboard](/vendor/admin-console-adding-buttons-links) requires the use of both the Application and SIG Application custom resources.
-
-The following is an example manifest file for the Application custom resource:
-
-```yaml
-apiVersion: kots.io/v1beta1
-kind: Application
-metadata:
- name: my-application
-spec:
- title: My Application
- icon: https://support.io/img/logo.png
- releaseNotes: These are our release notes
- allowRollback: false
- targetKotsVersion: "1.60.0"
- minKotsVersion: "1.40.0"
- requireMinimalRBACPrivileges: false
- additionalImages:
- - jenkins/jenkins:lts
- additionalNamespaces:
- - "*"
- ports:
- - serviceName: web
- servicePort: 9000
- localPort: 9000
- applicationUrl: "http://web"
- statusInformers:
- - deployment/my-web-svc
- - deployment/my-worker
- graphs:
- - title: User Signups
- query: 'sum(user_signup_events_total)'
-```
-
-## title
-
-| Description | -The application title. Used on the license upload and in various places in the Replicated Admin Console. | -
|---|---|
| Example | -|
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Yes | -
| Description | -The icon file for the application. Used on the license upload, in various places in the Admin Console, and in the Download Portal. The icon can be a remote URL or a Base64 encoded image. Base64 encoded images are required to display the image in air gap installations with no outbound internet access. | -
|---|---|
| Example | -|
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Yes | -
| Description | -The release notes for this version. These can also be set when promoting a release. | -
|---|---|
| Example | -|
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Yes | -
| Description | -
- Enable this flag to create a Rollback button on the Admin Console Version History page. -If an application is guaranteed not to introduce backwards-incompatible versions, such as through database migrations, then the Rollback does not revert any state. Rather, it recovers the YAML manifests that are applied to the cluster. - |
-
|---|---|
| Example | -|
| Default | -false |
-
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Embedded Cluster 1.17.0 and later supports partial rollbacks of the application version. Partial rollbacks are supported only when rolling back to a version where there is no change to the [Embedded Cluster Config](/reference/embedded-config) compared to the currently-installed version. For example, users can roll back to release version 1.0.0 after upgrading to 1.1.0 only if both 1.0.0 and 1.1.0 use the same Embedded Cluster Config. | -
| Description | -
- An array of additional namespaces as strings that Replicated KOTS creates on the cluster. For more information, see Defining Additional Namespaces. -In addition to creating the additional namespaces, KOTS ensures that the application secret exists in the namespaces. KOTS also ensures that this application secret has access to pull the application images, including both images that are used and any images you add in the For dynamically created namespaces, specify |
-
|---|---|
| Example | -|
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Yes | -
| Description | -An array of strings that reference images to be included in air gap bundles and pushed to the local registry during installation. KOTS detects images from the PodSpecs in the application. Some applications, such as Operators, might need to include additional images that are not referenced until runtime. For more information, see Defining Additional Images. |
-
|---|---|
| Example | -|
| Supports Go templates? | -No | -
| Supported for Embedded Cluster? | -Yes | -
| Description | -
`requireMinimalRBACPrivileges`

| `requireMinimalRBACPrivileges` | |
|---|---|
| Description | Requires that minimal role-based access control (RBAC) be used for all customer installations. When set to `true`, KOTS uses namespace-scoped RBAC instead of cluster-scoped RBAC. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
| Example | - |
| Default | `false` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No |

`supportMinimalRBACPrivileges`

| `supportMinimalRBACPrivileges` | |
|---|---|
| Description | Allows minimal role-based access control (RBAC) to be used for all customer installations. When set to `true`, minimal RBAC is not used by default; it is only used when the customer enables it during installation. For additional requirements and limitations related to using namespace-scoped RBAC, see About Namespace-scoped RBAC in Configuring KOTS RBAC. |
| Example | - |
| Default | `false` |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No |

`ports`

| `ports` | |
|---|---|
| Description | Extra ports (additional to the Admin Console port) that are port forwarded when running the `kots admin-console` command. Each port can specify the `serviceName`, `servicePort`, `localPort`, and `applicationUrl` fields. |
| Example | - |
| Supports Go templates? | Go templates are supported in the `serviceName` and `applicationUrl` fields only. Using Go templates in the `localPort` or `servicePort` fields results in an installation error similar to the following: `json: cannot unmarshal string into Go struct field ApplicationPort.spec.ports.servicePort of type int`. |
| Supported for Embedded Cluster? | Yes |

`statusInformers`

| `statusInformers` | |
|---|---|
| Description | Resources to watch and report application status back to the user. When you include `statusInformers`, the Admin Console dashboard reports the application status based on the listed resources. For more information about including status informers, see Adding Resource Status Informers. |
| Example | - |
| Supports Go templates? | Yes |
| Supported for Embedded Cluster? | Yes |

`graphs`

| `graphs` | |
|---|---|
| Description | Custom graphs to include on the Admin Console application dashboard. For more information about how to create custom graphs, see Adding Custom Graphs. |
| Example | - |
| Supports Go templates? | Yes |
| Supported for Embedded Cluster? | No |

`proxyRegistryDomain`

| `proxyRegistryDomain` | |
|---|---|
| Description | The custom domain used for proxy.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
| Example | - |
| Supports Go templates? | No |

`replicatedRegistryDomain`

| `replicatedRegistryDomain` | |
|---|---|
| Description | The custom domain used for registry.replicated.com. For more information, see Using Custom Domains. Introduced in KOTS v1.91.1. |
| Example | - |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | Yes |

`targetKotsVersion`

| `targetKotsVersion` | |
|---|---|
| Description | The KOTS version that is targeted by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
| Example | - |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No. Setting `targetKotsVersion` to a version earlier than the KOTS version included in the specified version of Embedded Cluster causes Embedded Cluster installations to fail with an error message like: `Error: This version of App Name requires a different version of KOTS from what you currently have installed.` To avoid installation failures, do not use `targetKotsVersion` in releases that support installation with Embedded Cluster. |

`minKotsVersion`

| `minKotsVersion` | |
|---|---|
| Description | The minimum KOTS version that is required by the release. For more information, see Setting Minimum and Target Versions for KOTS. |
| Example | - |
| Supports Go templates? | No |
| Supported for Embedded Cluster? | No. Setting `minKotsVersion` to a version later than the KOTS version included in the specified version of Embedded Cluster causes Embedded Cluster installations to fail with an error message like: `Error: This version of App Name requires a different version of KOTS from what you currently have installed.` To avoid installation failures, do not use `minKotsVersion` in releases that support installation with Embedded Cluster. |
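The following sketch shows how several of these fields fit together in a KOTS Application custom resource. The names, ports, and versions are illustrative only:

```yaml
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  # Require at least this KOTS version for the release
  minKotsVersion: "1.91.1"
  # Forward an extra port when running `kots admin-console`
  ports:
    - serviceName: my-app-service
      servicePort: 3000
      localPort: 8888
      applicationUrl: "http://my-app-service"
  # Report application status based on these resources
  statusInformers:
    - deployment/my-app
    - service/my-app-service
```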
| Field Name | Description |
|---|---|
| `includedNamespaces` | (Optional) Specifies an array of namespaces to include in the backup. If unspecified, all namespaces are included. |
| `excludedNamespaces` | (Optional) Specifies an array of namespaces to exclude from the backup. |
| `orderedResources` | (Optional) Specifies the order of the resources to collect during the backup process. This is a map that uses a key as the plural resource. Each resource name has the format NAMESPACE/OBJECTNAME. The object names are a comma delimited list. For cluster resources, use OBJECTNAME only. |
| `ttl` | Specifies the amount of time before this backup is eligible for garbage collection. Default: `720h` (equivalent to 30 days). This value is configurable only by the customer. |
| `hooks` | (Optional) Specifies the actions to perform at different times during a backup. The only supported hook is executing a command in a container in a pod (uses the pod exec API). Supports `pre` and `post` hooks. |
| `hooks.resources` | (Optional) Specifies an array of hooks that are applied to specific resources. |
| `hooks.resources.name` | Specifies the name of the hook. This value displays in the backup log. |
| `hooks.resources.includedNamespaces` | (Optional) Specifies an array of namespaces that this hook applies to. If unspecified, the hook is applied to all namespaces. |
| `hooks.resources.excludedNamespaces` | (Optional) Specifies an array of namespaces to which this hook does not apply. |
| `hooks.resources.includedResources` | Specifies an array of pod resources to which this hook applies. |
| `hooks.resources.excludedResources` | (Optional) Specifies an array of resources to which this hook does not apply. |
| `hooks.resources.labelSelector` | (Optional) Specifies that this hook only applies to objects that match this label selector. |
| `hooks.resources.pre` | Specifies an array of exec hooks to run before executing custom actions. |
| `hooks.resources.post` | Specifies an array of exec hooks to run after executing custom actions. Supports the same arrays and fields as `pre` hooks. |
| `hooks.resources.[post/pre].exec` | Specifies the type of the hook. `exec` is the only supported type. |
| `hooks.resources.[post/pre].exec.container` | (Optional) Specifies the name of the container where the specified command will be executed. If unspecified, the first container in the pod is used. |
| `hooks.resources.[post/pre].exec.command` | Specifies the command to execute. The format is an array. |
| `hooks.resources.[post/pre].exec.onError` | (Optional) Specifies how to handle an error that might occur when executing the command. Valid values: `Fail` and `Continue`. Default: `Fail` |
| `hooks.resources.[post/pre].exec.timeout` | (Optional) Specifies how many seconds to wait for the command to finish executing before the action times out. Default: `30s` |
`affix`

| `affix` | |
|---|---|
| Description | Items can be affixed `left` or `right`. Specify the `affix` field to position related config items side by side in the Admin Console. |
| Required? | No |
| Example | - |
| Supports Go templates? | Yes |

`default`

| `default` | |
|---|---|
| Description | Defines the default value for the config item. If the user does not provide a value for the item, then the `default` value is applied. |
| Required? | No |
| Example | - |
| Supports Go templates? | Yes. Every time the user makes a change to their configuration settings for the application, any template functions used in the `default` property are reevaluated. |

`help_text`

| `help_text` | |
|---|---|
| Description | Displays a helpful message below the item title in the Admin Console. Markdown syntax is supported. For more information about markdown syntax, see Basic writing and formatting syntax in the GitHub Docs. |
| Required? | No |
| Example | - |
| Supports Go templates? | Yes |

`hidden`

| `hidden` | |
|---|---|
| Description | Hidden items are not visible in the Admin Console. |
| Required? | No |
| Example | - |
| Supports Go templates? | No |

`name`

| `name` | |
|---|---|
| Description | A unique identifier for the config item. Item names must be unique both within the group and across all groups. The item `name` is not displayed in the Admin Console. |
| Required? | Yes |
| Example | - |
| Supports Go templates? | Yes |

`readonly`

| `readonly` | |
|---|---|
| Description | Readonly items are displayed in the Admin Console and users cannot edit their value. |
| Required? | No |
| Example | - |
| Supports Go templates? | No |

`recommended`

| `recommended` | |
|---|---|
| Description | Displays a Recommended tag for the config item in the Admin Console. |
| Required? | No |
| Example | - |
| Supports Go templates? | No |

`required`

| `required` | |
|---|---|
| Description | Displays a Required tag for the config item in the Admin Console. A required item prevents the application from starting until it has a value. |
| Required? | No |
| Example | - |
| Supports Go templates? | No |

`title`

| `title` | |
|---|---|
| Description | The title of the config item that displays in the Admin Console. |
| Required? | Yes |
| Example | - |
| Supports Go templates? | Yes |

`type`

| `type` | |
|---|---|
| Description | Each item has a `type` property that defines the type of user input accepted by the item. The default `type` is `text`. For information about each type, see Item Types. |
| Required? | Yes |
| Example | - |
| Supports Go templates? | No |

`value`

| `value` | |
|---|---|
| Description | Defines the value of the config item. Data that you add to `value` appears as the HTML input value. If the config item is not readonly, then the data that you add to `value` is overwritten by any user input for the item. |
| Required? | No |
| Example | - |
| Supports Go templates? | Yes |

`when`

| `when` | |
|---|---|
| Description | The `when` property denotes items that are displayed in the Admin Console only when the specified condition evaluates to true. |
| Required? | No |
| Example | Display the item only when another config option has a given value. For additional examples, see Using Conditional Statements in Configuration Fields. |
| Supports Go templates? | Yes |

`validation`

| `validation` | |
|---|---|
| Description | The `validation` property defines rules, such as a regular expression, used to validate the value that the user provides for the config item. |
| Required? | No |
| Example | Validates the value and returns a message if the value does not match the rule. |
| Supports Go templates? | No |
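A minimal sketch showing how several of these item properties combine in a KOTS Config custom resource. The group, item names, and defaults are illustrative:

```yaml
apiVersion: kots.io/v1beta1
kind: Config
metadata:
  name: config
spec:
  groups:
    - name: ingress_settings
      title: Ingress Settings
      items:
        - name: hostname
          title: Hostname
          type: text
          required: true
          default: "app.example.com"
          help_text: The hostname used to access the application.
        - name: use_tls
          title: Use TLS
          type: bool
          default: "1"
        - name: tls_cert
          title: TLS Certificate
          type: file
          # Only shown when the user enables TLS above
          when: repl{{ ConfigOptionEquals "use_tls" "1" }}
```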
| Level | Description |
|---|---|
| `error` | The rule is enabled and shows as an error. |
| `warn` | The rule is enabled and shows as a warning. |
| `info` | The rule is enabled and shows an informational message. |
| `off` | The rule is disabled. |
| Field Name | Description |
|---|---|
| `collectorName` | (Optional) A collector can specify the `collectorName` field. In some collectors, this field controls the path where result files are stored in the support bundle. |
| `exclude` | (Optional) (KOTS Only) Based on the runtime available configuration, a conditional can be specified in the `exclude` field. This is useful for deployment techniques that allow templating for Replicated KOTS and the optional KOTS Helm component. When this value is `true`, the collector is not included. |
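For example, the following sketch excludes a logs collector unless the user has enabled an embedded database on the Config screen. The `embedded_db` option name and the selector are illustrative:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: SupportBundle
metadata:
  name: my-app
spec:
  collectors:
    - logs:
        # collectorName controls where the results are stored in the bundle
        collectorName: postgres-logs
        selector:
          - app=postgres
        # Skip this collector when the embedded database is disabled
        exclude: repl{{ ConfigOptionEquals "embedded_db" "0" }}
```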
| Field Name | Description |
|---|---|
| `collectorName` | (Optional) An analyzer can specify the `collectorName` field. |
| `exclude` | (Optional) (KOTS Only) A condition based on the runtime available configuration can be specified in the `exclude` field. This is useful for deployment techniques that allow templating for KOTS and the optional KOTS Helm component. When this value is `true`, the analyzer is not included. |
| `strict` | (Optional) (KOTS Only) An analyzer can be set to `strict: true` so that `fail` outcomes for that analyzer prevent the release from being deployed by KOTS until the vendor-specified requirements are met. When `exclude: true` is also specified, `exclude` overrides `strict` and the analyzer is not executed. |
| Field Name | Description |
|---|---|
| `file` | (Optional) Specifies a single file for redaction. |
| `files` | (Optional) Specifies multiple files for redaction. Globs are supported. For example, `/my/test/glob/*` matches `/my/test/glob/file`, but does not match `/my/test/glob/subdir/file`. |

### removals

The `removals` object is required and defines the redactions that occur. This object supports the following fields. At least one of these fields must be specified:

| Field Name | Description |
|---|---|
| `regex` | (Optional) Allows a regular expression to be applied for removal and redaction on lines that immediately follow a line that matches a filter. The `selector` field is used to identify lines, and the `redactor` field specifies a regular expression that runs on the line after any line identified by `selector`. If `selector` is empty, the redactor runs on every line. Using a selector is useful for removing values from pretty-printed JSON, where the value to be redacted is pretty-printed on the line beneath another value. Matches to the regex are removed or redacted, depending on the construction of the regex. Any portion of a match not contained within a capturing group is removed entirely. The contents of capturing groups tagged `mask` are masked with `***HIDDEN***`. Capturing groups tagged `drop` are dropped. |
| `values` | (Optional) Specifies values to replace with the string `***HIDDEN***`. |
| `yamlPath` | (Optional) Specifies a `.`-delimited path to the items to be redacted from a YAML document. If an item in the path is the literal string `*`, the redactor is applied to all options at that level. Files that fail to parse as YAML or do not contain any matches are not modified. Files that do contain matches are re-rendered, which removes comments and custom formatting. Multi-document YAML is not fully supported. Only the first document is checked for matches, and if a match is found, later documents are discarded entirely. |
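The following sketch combines the three removal types in one redactor. The file glob, regular expression, and values are illustrative:

```yaml
apiVersion: troubleshoot.sh/v1beta2
kind: Redactor
metadata:
  name: my-redactor
spec:
  redactors:
    - name: redact-secrets
      fileSelector:
        files:
          - "data/my-app/*"
      removals:
        regex:
          # Mask the value printed on the line after a "password" key
          - selector: '"password":'
            redactor: '("value": ")(?P<mask>.*)(")'
        values:
          - supersecret
        yamlPath:
          - "spec.template.*.password"
```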
+| Metric | Description | Target Trend |
+|---|---|---|
+| Instances on last three versions | Percent of active instances that are running one of the latest three versions of your application. Formula: count of active instances on one of the latest three versions ÷ count of all active instances. | Increase towards 100% |
+| Unique versions | Number of unique versions of your application running in active instances. Formula: count of distinct application versions across active instances. | Decrease towards less than or equal to three |
+| Median relative age | The relative age of a single instance is the number of days between the date that the instance's version was promoted to the channel and the date when the latest available application version was promoted to the channel. Median relative age is the median value across all active instances for the selected time period and channel. Formula: median of the relative age across active instances. | Depends on release cadence. For vendors who ship every four to eight weeks, decrease the median relative age towards 60 days or fewer. |
+| Upgrades completed | Total number of completed upgrades across active instances for the selected time period and channel. An upgrade is a single version change for an instance. An upgrade is considered complete when the instance deploys the new application version. The instance does not need to become available (as indicated by reaching a Ready state) after deploying the new version for the upgrade to be counted as complete. Formula: count of completed upgrades. | Increase compared to any previous period, unless you reduce your total number of live instances. |
| Flag | Description |
|---|---|
| `--admin-console-password` | Set the password for the Admin Console. The password must be at least six characters in length. If not set, the user is prompted to provide an Admin Console password. |
| `--admin-console-port` | Port on which to run the KOTS Admin Console. **Default**: By default, the Admin Console runs on port 30000. **Limitation:** It is not possible to change the port for the Admin Console during a restore with Embedded Cluster. For more information, see [Disaster Recovery for Embedded Cluster (Alpha)](/vendor/embedded-disaster-recovery). |
| `--airgap-bundle` | The Embedded Cluster air gap bundle used for installations in air-gapped environments with no outbound internet access. For information about how to install in an air-gapped environment, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap). |
| `--cidr` | The range of IP addresses that can be assigned to Pods and Services, in CIDR notation. **Default:** By default, the CIDR block is `10.244.0.0/16`. **Requirement**: Embedded Cluster 1.16.0 or later. |
| `--config-values` | Path to the ConfigValues file for the application. The ConfigValues file can be used to pass the application configuration values from the command line during installation, such as when performing automated installations as part of CI/CD pipelines. For more information, see [Installing with Embedded Cluster from the Command Line](/enterprise/installing-embedded-automation). **Requirement**: Embedded Cluster 1.18.0 and later. |
| `--data-dir` | The data directory used by Embedded Cluster. **Default**: `/var/lib/embedded-cluster`. **Requirement**: Embedded Cluster 1.16.0 or later. |
| `--http-proxy` | Proxy server to use for HTTP. |
| `--https-proxy` | Proxy server to use for HTTPS. |
| `--local-artifact-mirror-port` | Port on which to run the Local Artifact Mirror (LAM). **Default**: By default, the LAM runs on port 50000. |
| `--network-interface` | The name of the network interface to bind to for the Kubernetes API. A common use case of `--network-interface` is for multi-node clusters where node communication should happen on a particular network. **Default**: If a network interface is not provided, the first valid, non-local network interface is used. |
| `--no-proxy` | Comma-separated list of hosts for which not to use a proxy. For single-node installations, pass the IP address of the node where you are installing. For multi-node installations, when deploying the first node, pass the list of IP addresses for all nodes in the cluster (typically in CIDR notation). The network interface's subnet is automatically added to the no-proxy list if the node's IP address is not already included. Internal cluster communication is never proxied. To ensure your application's internal cluster communication is not proxied, use fully qualified domain names like `my-service.my-namespace.svc` or `my-service.my-namespace.svc.cluster.local`. |
| `--private-ca` | The path to trusted certificate authority (CA) certificates. Using the `--private-ca` flag ensures that the CA is trusted by the installation. KOTS writes the CA certificates provided with the `--private-ca` flag to a ConfigMap in the cluster. The KOTS [PrivateCACert](/reference/template-functions-static-context#privatecacert) template function returns the ConfigMap containing the private CA certificates supplied with the `--private-ca` flag. You can use this template function to mount the ConfigMap so your containers trust the CA too. |
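To illustrate, an install command that combines several of these flags might look like the following. The port, directory, and proxy values are illustrative:

```bash
sudo ./my-app install --license license.yaml \
  --admin-console-port 8800 \
  --data-dir /opt/embedded-cluster \
  --http-proxy http://proxy.internal:3128 \
  --https-proxy http://proxy.internal:3128 \
  --no-proxy 10.0.0.0/8
```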
+
+ [View a larger version of this image](/images/ec-version-command.png)
+
+* Any Helm extensions included in the `extensions` field of the Embedded Cluster Config are _not_ included in backups. Helm extensions are reinstalled as part of the restore process. To include Helm extensions in backups, configure the Velero Backup resource to include the extensions using namespace-based or label-based selection. For more information, see [Configure the Velero Custom Resources](#config-velero-resources) below.
+
+* Users can only restore from the most recent backup.
+
+* Velero is installed only during the initial installation process. Enabling the disaster recovery license field for a customer after they have already installed has no effect.
+
+* If the `--admin-console-port` flag was used during install to change the port for the Admin Console, note that the Admin Console port from the backup will be used during the restore and cannot be changed. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
+
+## Configure Disaster Recovery
+
+This section describes how to configure disaster recovery for Embedded Cluster installations. It also describes how to enable access to the disaster recovery feature on a per-customer basis.
+
+### Configure the Velero Custom Resources {#config-velero-resources}
+
+This section describes how to set up Embedded Cluster disaster recovery for your application by configuring Velero [Backup](https://velero.io/docs/latest/api-types/backup/) and [Restore](https://velero.io/docs/latest/api-types/restore/) custom resources in a release.
+
+To configure Velero Backup and Restore custom resources for Embedded Cluster disaster recovery:
+
+1. In a new release containing your application files, add a Velero Backup resource. In the Backup resource, use namespace-based or label-based selection to indicate the application resources that you want to be included in the backup. For more information, see [Backup API Type](https://velero.io/docs/latest/api-types/backup/) in the Velero documentation.
+
+ :::important
+   If you use namespace-based selection to include all of your application resources deployed in the `kotsadm` namespace, ensure that you exclude the Replicated resources that are also deployed in the `kotsadm` namespace. The Embedded Cluster infrastructure components are always included in backups automatically, so excluding the Replicated resources avoids duplication.
+ :::
+
+ **Example:**
+
+ The following Backup resource uses namespace-based selection to include application resources deployed in the `kotsadm` namespace:
+
+ ```yaml
+ apiVersion: velero.io/v1
+ kind: Backup
+ metadata:
+ name: backup
+ spec:
+ # Back up the resources in the kotsadm namespace
+ includedNamespaces:
+ - kotsadm
+ orLabelSelectors:
+ - matchExpressions:
+ # Exclude Replicated resources from the backup
+ - { key: kots.io/kotsadm, operator: NotIn, values: ["true"] }
+ ```
+
+1. In the same release, add a Velero Restore resource. In the `backupName` field of the Restore resource, include the name of the Backup resource that you created. For more information, see [Restore API Type](https://velero.io/docs/latest/api-types/restore/) in the Velero documentation.
+
+ **Example**:
+
+ ```yaml
+ apiVersion: velero.io/v1
+ kind: Restore
+ metadata:
+ name: restore
+ spec:
+ # the name of the Backup resource that you created
+ backupName: backup
+ includedNamespaces:
+ - '*'
+ ```
+
+1. For any image names that you include in your Backup and Restore resources, rewrite the image name using the Replicated KOTS [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry), [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost), and [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) template functions. This ensures that the image name is rendered correctly during deployment, allowing the image to be pulled from the user's local image registry (such as in air gap installations) or through the Replicated proxy registry.
+
+ **Example:**
+
+ ```yaml
+ apiVersion: velero.io/v1
+ kind: Restore
+ metadata:
+ name: restore
+ spec:
+ hooks:
+ resources:
+ - name: restore-hook-1
+ includedNamespaces:
+ - kotsadm
+ labelSelector:
+ matchLabels:
+ app: example
+ postHooks:
+ - init:
+ initContainers:
+ - name: restore-hook-init1
+ image:
+ # Use HasLocalRegistry, LocalRegistryHost, and LocalRegistryNamespace
+ # to template the image name
+ registry: '{{repl HasLocalRegistry | ternary LocalRegistryHost "proxy.replicated.com" }}'
+ repository: '{{repl HasLocalRegistry | ternary LocalRegistryNamespace "proxy/my-app/quay.io/my-org" }}/nginx'
+ tag: 1.24-alpine
+ ```
+ For more information about how to rewrite image names using the KOTS [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry), [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost), and [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) template functions, including additional examples, see [Task 1: Rewrite Image Names](helm-native-v2-using#rewrite-image-names) in _Configuring the HelmChart v2 Custom Resource_.
+
+1. If you support air gap installations, add any images that are referenced in your Backup and Restore resources to the `additionalImages` field of the KOTS Application custom resource. This ensures that the images are included in the air gap bundle for the release so they can be used during the backup and restore process in environments with limited or no outbound internet access. For more information, see [additionalImages](/reference/custom-resource-application#additionalimages) in _Application_.
+
+ **Example:**
+
+ ```yaml
+ apiVersion: kots.io/v1beta1
+ kind: Application
+ metadata:
+ name: my-app
+ spec:
+ additionalImages:
+ - elasticsearch:7.6.0
+ - quay.io/orgname/private-image:v1.2.3
+ ```
+
+1. (Optional) Use Velero functionality like [backup](https://velero.io/docs/main/backup-hooks/) and [restore](https://velero.io/docs/main/restore-hooks/) hooks to customize the backup and restore process as needed.
+
+ **Example:**
+
+   A Postgres database might be backed up using `pg_dump` to extract the database into a file as part of a backup hook. It can then be restored from that file in a restore hook:
+
+ ```yaml
+ podAnnotations:
+ backup.velero.io/backup-volumes: backup
+ pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U {{repl ConfigOption "postgresql_username" }} -d {{repl ConfigOption "postgresql_database" }} -h 127.0.0.1 > /scratch/backup.sql"]'
+ pre.hook.backup.velero.io/timeout: 3m
+ post.hook.restore.velero.io/command: '["/bin/bash", "-c", "[ -f \"/scratch/backup.sql\" ] && PGPASSWORD=$POSTGRES_PASSWORD psql -U {{repl ConfigOption "postgresql_username" }} -h 127.0.0.1 -d {{repl ConfigOption "postgresql_database" }} -f /scratch/backup.sql && rm -f /scratch/backup.sql;"]'
+ post.hook.restore.velero.io/wait-for-ready: 'true' # waits for the pod to be ready before running the post-restore hook
+ ```
+
+1. Save and promote the release to a development channel for testing.
+
+### Enable the Disaster Recovery Feature for Your Customers
+
+After configuring disaster recovery for your application, you can enable it on a per-customer basis with the **Allow Disaster Recovery (Alpha)** license field.
+
+To enable disaster recovery for a customer:
+
+1. In the Vendor Portal, go to the [Customers](https://vendor.replicated.com/customers) page and select the target customer.
+
+1. On the **Manage customer** page, under **License options**, enable the **Allow Disaster Recovery (Alpha)** field.
+
+ When your customer installs with Embedded Cluster, Velero will be deployed if the **Allow Disaster Recovery (Alpha)** license field is enabled.
+
+## Take Backups and Restore
+
+This section describes how your customers can configure backup storage, take backups, and restore from backups.
+
+### Configure Backup Storage and Take Backups in the Admin Console
+
+Customers with the **Allow Disaster Recovery (Alpha)** license field can configure their backup storage location and take backups from the Admin Console.
+
+To configure backup storage and take backups:
+
+1. After installing the application and logging in to the Admin Console, click the **Disaster Recovery** tab at the top of the Admin Console.
+
+1. For the desired S3-compatible backup storage location, enter the bucket, prefix (optional), access key ID, access key secret, endpoint, and region. Click **Update storage settings**.
+
+
+
+ [View a larger version of this image](/images/dr-backup-storage-settings.png)
+
+1. (Optional) From this same page, configure scheduled backups and a retention policy for backups.
+
+
+
+ [View a larger version of this image](/images/dr-scheduled-backups.png)
+
+1. In the **Disaster Recovery** submenu, click **Backups**. Backups can be taken from this screen.
+
+
+
+ [View a larger version of this image](/images/dr-backups.png)
+
+### Restore from a Backup
+
+To restore from a backup:
+
+1. SSH onto a new machine where you want to restore from a backup.
+
+1. Download the Embedded Cluster installation assets for the version of the application that was included in the backup. You can find the command for downloading Embedded Cluster installation assets in the **Embedded Cluster install instructions dialog** for the customer. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
+
+ :::note
+ The version of the Embedded Cluster installation assets must match the version that is in the backup. For more information, see [Limitations and Known Issues](#limitations-and-known-issues).
+ :::
+
+1. Run the restore command:
+
+ ```bash
+ sudo ./APP_SLUG restore
+ ```
+ Where `APP_SLUG` is the unique application slug.
+
+ Note the following requirements and guidance for the `restore` command:
+
+ * If the installation is behind a proxy, the same proxy settings provided during install must be provided to the restore command using `--http-proxy`, `--https-proxy`, and `--no-proxy`. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
+
+   * If the `--cidr` flag was used during install to set the IP address ranges for Pods and Services, this flag must be provided with the same CIDR during the restore. If this flag is not provided or is provided with a different CIDR, the restore will fail with an error message telling you to rerun with the appropriate value. However, it will take some time before that error occurs. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
+
+ * If the `--local-artifact-mirror-port` flag was used during install to change the port for the Local Artifact Mirror (LAM), you can optionally use the `--local-artifact-mirror-port` flag to choose a different LAM port during restore. For example, `restore --local-artifact-mirror-port=50000`. If no LAM port is provided during restore, the LAM port that was supplied during installation will be used. For more information, see [Embedded Cluster Install Command Options](/reference/embedded-cluster-install).
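+
+   For example, a restore that reuses the proxy and CIDR settings from the original installation might look like the following (the proxy and CIDR values are illustrative):
+
+   ```bash
+   sudo ./my-app restore \
+     --http-proxy http://proxy.internal:3128 \
+     --https-proxy http://proxy.internal:3128 \
+     --no-proxy 10.0.0.0/8 \
+     --cidr 10.244.0.0/16
+   ```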
+
+ You will be guided through the process of restoring from a backup.
+
+1. When prompted, enter the information for the backup storage location.
+
+ 
+ [View a larger version of this image](/images/dr-restore.png)
+
+1. When prompted, confirm that you want to restore from the detected backup.
+
+ 
+ [View a larger version of this image](/images/dr-restore-from-backup-confirmation.png)
+
+   After some time, the Admin Console URL is displayed:
+
+ 
+ [View a larger version of this image](/images/dr-restore-admin-console-url.png)
+
+1. (Optional) If the cluster should have multiple nodes, go to the Admin Console to get a join command and join additional nodes to the cluster. For more information, see [Managing Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
+
+1. Type `continue` when you are ready to proceed with the restore process.
+
+ 
+ [View a larger version of this image](/images/dr-restore-continue.png)
+
+ After some time, the restore process completes.
+
+ If the `restore` command is interrupted during the restore process, you can resume by rerunning the `restore` command and selecting to resume the previous restore. This is useful if your SSH session is interrupted during the restore.
diff --git a/docs/reference/embedded-overview.mdx b/docs/reference/embedded-overview.mdx
new file mode 100644
index 0000000000..0811f182f5
--- /dev/null
+++ b/docs/reference/embedded-overview.mdx
@@ -0,0 +1,121 @@
+import EmbeddedCluster from "../partials/embedded-cluster/_definition.mdx"
+import Requirements from "../partials/embedded-cluster/_requirements.mdx"
+import EmbeddedClusterPortRequirements from "../partials/embedded-cluster/_port-reqs.mdx"
+
+# Embedded Cluster Overview
+
+This topic provides an introduction to Replicated Embedded Cluster, including a description of the built-in extensions installed by Embedded Cluster, an overview of the Embedded Cluster single-node and multi-node architecture, and requirements and limitations.
+
+:::note
+If you are instead looking for information about creating Kubernetes Installers with Replicated kURL, see the [Replicated kURL](/vendor/packaging-embedded-kubernetes) section.
+:::
+
+## Overview
+
+
+
+ [View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
+
+1. Enter an Admin Console password when prompted.
+
+ The Admin Console URL is printed when the installation finishes. Access the Admin Console to begin installing your application. During the installation process in the Admin Console, you have the opportunity to add nodes if you want a multi-node cluster. Then you can provide application config, run preflights, and deploy your application.
+
+## About Configuring Embedded Cluster
+
+To install an application with Embedded Cluster, an Embedded Cluster Config must be present in the application release. The Embedded Cluster Config lets you define several characteristics about the cluster that will be created.
+
+For more information, see [Embedded Cluster Config](/reference/embedded-config).
+
+## About Installing with Embedded Cluster
+
+This section provides an overview of installing applications with Embedded Cluster.
+
+### Installation Overview
+
+The following diagram demonstrates how Kubernetes and an application are installed into a customer environment using Embedded Cluster:
+
+
+
+[View a larger version of this image](/images/embedded-cluster-install.png)
+
+As shown in the diagram above, the Embedded Cluster Config is included in the application release in the Replicated Vendor Portal and is used to generate the Embedded Cluster installation assets. Users can download these installation assets from the Replicated app service (`replicated.app`) on the command line, then run the Embedded Cluster installation command to install Kubernetes and the KOTS Admin Console. Finally, users access the Admin Console to optionally add nodes to the cluster and to configure and install the application.
+
+### Installation Options
+
+Embedded Cluster supports installations in online (internet-connected) environments and air gap environments with no outbound internet access.
+
+For online installations, Embedded Cluster also supports installing behind a proxy server.
+
+For more information about how to install with Embedded Cluster, see:
+* [Online Installation with Embedded Cluster](/enterprise/installing-embedded)
+* [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap)
+
+### Customer-Specific Installation Instructions
+
+To install with Embedded Cluster, you can follow the customer-specific instructions provided on the **Customer** page in the Vendor Portal. For example:
+
+
+
+[View a larger version of this image](/images/embedded-cluster-install-dialog.png)
+
+### (Optional) Serve Installation Assets Using the Vendor API
+
+To install with Embedded Cluster, you need to download the Embedded Cluster installer binary and a license. Air gap installations also require an air gap bundle. Some vendors already have a portal where their customers can log in to access documentation or download artifacts. In cases like this, you can serve the Embedded Cluster installation assets yourself using the Replicated Vendor API, rather than having customers download the assets from the Replicated app service using a curl command during installation.
+
+To serve Embedded Cluster installation assets with the Vendor API:
+
+1. If you have not done so already, create an API token for the Vendor API. See [Using the Vendor API v3](/reference/vendor-api-using#api-token-requirement).
+
+1. Call the [Get an Embedded Cluster release](https://replicated-vendor-api.readme.io/reference/getembeddedclusterrelease) endpoint to download the assets needed to install your application with Embedded Cluster. Your customers must take this binary and their license and copy them to the machine where they will install your application.
+
+ Note the following:
+
+ * (Recommended) Provide the `customerId` query parameter so that the customer’s license is included in the downloaded tarball. This mirrors what is returned when a customer downloads the binary directly using the Replicated app service and is the most useful option. Excluding the `customerId` is useful if you plan to distribute the license separately.
+
+ * If you do not provide any query parameters, this endpoint downloads the Embedded Cluster binary for the latest release on the specified channel. You can provide the `channelSequence` query parameter to download the binary for a particular release.
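+
+   For example, a download script might look like the following sketch. The endpoint path and query parameters shown are assumptions based on the linked endpoint; confirm them against the "Get an Embedded Cluster release" reference before use:
+
+   ```bash
+   # Assumes APP_ID, CHANNEL_ID, CUSTOMER_ID, and REPLICATED_API_TOKEN are set
+   curl -fsSL \
+     -H "Authorization: $REPLICATED_API_TOKEN" \
+     -o my-app-embedded-cluster.tgz \
+     "https://api.replicated.com/vendor/v3/app/$APP_ID/channel/$CHANNEL_ID/embedded-cluster?customerId=$CUSTOMER_ID"
+   ```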
+
+### About Host Preflight Checks
+
+During installation, Embedded Cluster automatically runs a default set of _host preflight checks_. The default host preflight checks are designed to verify that the installation environment meets the requirements for Embedded Cluster, such as:
+* The system has sufficient disk space
+* The system has at least 2 GB of memory and 2 CPU cores
+* The system clock is synchronized
+
+For Embedded Cluster requirements, see [Embedded Cluster Installation Requirements](/enterprise/installing-embedded-requirements). For the full default host preflight spec for Embedded Cluster, see [`host-preflight.yaml`](https://github.com/replicatedhq/embedded-cluster/blob/main/pkg/preflights/host-preflight.yaml) in the `embedded-cluster` repository in GitHub.
+
+If any of the host preflight checks fail, installation is blocked and a message describing the failure is displayed. For more information about host preflight checks for installations on VMs or bare metal servers, see [About Host Preflights](preflight-support-bundle-about#host-preflights).
+
+#### Limitations
+
+Embedded Cluster host preflight checks have the following limitations:
+
+* The default host preflight checks for Embedded Cluster cannot be modified, and vendors cannot provide their own custom host preflight spec for Embedded Cluster.
+* Host preflight checks do not check that any application-specific requirements are met. For more information about defining preflight checks for your application, see [Defining Preflight Checks](/vendor/preflight-defining).
+
+#### Skip Host Preflight Checks
+
+You can skip host preflight checks by passing the `--skip-host-preflights` flag with the Embedded Cluster `install` command. For example:
+
+```bash
+sudo ./my-app install --license license.yaml --skip-host-preflights
+```
+
+When you skip host preflight checks, the Admin Console still runs any application-specific preflight checks that are defined in the release before the application is deployed.
+
+:::note
+Skipping host preflight checks is _not_ recommended for production installations.
+:::
+
+## About Managing Multi-Node Clusters with Embedded Cluster
+
+This section describes managing nodes in multi-node clusters created with Embedded Cluster.
+
+### Defining Node Roles for Multi-Node Clusters
+
+You can optionally define node roles in the Embedded Cluster Config. For multi-node clusters, roles can be useful for assigning specific application workloads to particular nodes. If node roles are defined, users access the Admin Console to assign one or more roles to a node when it is joined to the cluster.
+
+For more information, see [roles](/reference/embedded-config#roles) in _Embedded Cluster Config_.
+
+### Adding Nodes
+
+Users can add nodes to a cluster with Embedded Cluster from the Admin Console. The Admin Console provides the join command used to add nodes to the cluster.
+
+For more information, see [Managing Multi-Node Clusters with Embedded Cluster](/enterprise/embedded-manage-nodes).
+
+### High Availability for Multi-Node Clusters (Alpha)
+
+Multi-node clusters are not highly available by default. Enabling high availability (HA) requires that at least three controller nodes are present in the cluster. Users can enable HA when joining the third node.
+
+For more information about creating HA multi-node clusters with Embedded Cluster, see [Enable High Availability for Multi-Node Clusters (Alpha)](/enterprise/embedded-manage-nodes#ha) in _Managing Multi-Node Clusters with Embedded Cluster_.
+
+## About Performing Updates with Embedded Cluster
+
+
+
+[View a larger version of this image](/images/helm-install-diagram.png)
+
+As shown in the diagram above, when a release containing one or more Helm charts is promoted to a channel, the Replicated Vendor Portal automatically extracts any Helm charts included in the release. These charts are pushed as OCI objects to the Replicated registry. The Replicated registry is a private OCI registry hosted by Replicated at `registry.replicated.com`. For information about security for the Replicated registry, see [Replicated Registry Security](packaging-private-registry-security).
+
+For example, if your application in the Vendor Portal is named My App and you promote a release containing a Helm chart with `name: my-chart` to a channel with the slug `beta`, then the Vendor Portal pushes the chart to the following location: `oci://registry.replicated.com/my-app/beta/my-chart`.
+
+Customers can install your Helm chart by first logging in to the Replicated registry with their unique license ID. This step ensures that any customer who installs your chart from the registry has a valid, unexpired license. After the customer logs in to the Replicated registry, they can run `helm install` to install the chart from the registry.
+
+During installation, the Replicated registry injects values into the `global.replicated` key of the parent Helm chart's values file. For more information about the values schema, see [Helm global.replicated Values Schema](helm-install-values-schema).
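+
+As a sketch, a chart template could read one of these injected values, such as the license ID, from `.Values.global.replicated`. The Secret name is illustrative, and this assumes the chart was installed through the Replicated registry so that the values are populated:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: replicated-license
+stringData:
+  # Injected by the Replicated registry at install time
+  LICENSE_ID: '{{ .Values.global.replicated.licenseID }}'
+```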
+
+## Limitations
+
+Helm installations have the following limitations:
+
+* Installing with Helm in air gap environments is a Beta feature. For more information, see [Installing and Updating with Helm in Air Gap Environments](/vendor/helm-install-airgap).
+* Helm CLI installations do not provide access to any of the features of the Replicated KOTS installer, such as:
+ * The KOTS Admin Console
+ * Strict preflight checks that block installation
+ * Backup and restore with snapshots
+ * Required releases with the **Prevent this release from being skipped during upgrades** option
diff --git a/docs/reference/helm-install-release.md b/docs/reference/helm-install-release.md
new file mode 100644
index 0000000000..4c1a8a0e8e
--- /dev/null
+++ b/docs/reference/helm-install-release.md
@@ -0,0 +1,55 @@
+import DependencyYaml from "../partials/replicated-sdk/_dependency-yaml.mdx"
+import RegistryLogout from "../partials/replicated-sdk/_registry-logout.mdx"
+import HelmPackage from "../partials/helm/_helm-package.mdx"
+
+# Packaging a Helm Chart for a Release
+
+This topic describes how to package a Helm chart and the Replicated SDK into a chart archive that can be added to a release.
+
+## Overview
+
+To add a Helm chart to a release, you first add the Replicated SDK as a dependency of the Helm chart and then package the chart and its dependencies as a `.tgz` chart archive.
+
+The Replicated SDK is a Helm chart that can be installed as a small service alongside your application. The SDK provides access to key Replicated features, such as support for collecting custom metrics on application instances. For more information, see [About the Replicated SDK](replicated-sdk-overview).
+
+## Requirements and Recommendations
+
+This section includes requirements and recommendations for Helm charts.
+
+### Chart Version Requirement
+
+The chart version in your Helm chart must comply with image tag format requirements. A valid tag can contain only lowercase and uppercase letters, digits, underscores, periods, and dashes.
+
+The chart version must also comply with the Semantic Versioning (SemVer) specification. When you run the `helm install` command without the `--version` flag, Helm retrieves the list of all available image tags for the chart from the registry and compares them using the SemVer comparison rules described in the SemVer specification. The version that is installed is the version with the largest tag value. For more information about the SemVer specification, see the [Semantic Versioning](https://semver.org) documentation.
+
+### Chart Naming
+
+For releases that contain more than one Helm chart, Replicated recommends that you use unique names for each top-level Helm chart in the release. This aligns with Helm best practices and also avoids potential conflicts in filenames during installation that could cause the installation to fail. For more information, see [Installation Fails for Release With Multiple Helm Charts](helm-install-troubleshooting#air-gap-values-file-conflict) in _Troubleshooting Helm Installations_.
+
+### Helm Best Practices
+
+Replicated recommends that you review the [Best Practices](https://helm.sh/docs/chart_best_practices/) guide in the Helm documentation to ensure that your Helm chart or charts follow the required and recommended conventions.
+
+## Package a Helm Chart {#release}
+
+This procedure shows how to create a Helm chart archive to add to a release. For more information about the Helm CLI commands in this procedure, see the [Helm Commands](https://helm.sh/docs/helm/helm/) section in the Helm documentation.
+
+To package a Helm chart so that it can be added to a release:
+
+1. In your application Helm chart `Chart.yaml` file, add the YAML below to declare the SDK as a dependency. If your application is installed as multiple charts, declare the SDK as a dependency of the chart that customers install first. Do not declare the SDK in more than one chart.
+
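+   A sketch of the dependency entry in `Chart.yaml` (the SDK version shown is illustrative; use the latest available version):
+
+   ```yaml
+   # Chart.yaml
+   dependencies:
+   - name: replicated
+     repository: oci://registry.replicated.com/library
+     version: 1.0.0
+   ```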
+
+
+[View a larger version of this image](/images/helm-kots-install-options.png)
+
+For a tutorial that demonstrates how to add a sample Helm chart to a release and then install the release using both KOTS and the Helm CLI, see [Install a Helm Chart with KOTS and the Helm CLI](/vendor/tutorial-kots-helm-setup).
+
+## How KOTS Deploys Helm Charts
+
+This section describes how KOTS uses the HelmChart custom resource to deploy Helm charts.
+
+### About the HelmChart Custom Resource
+
+| HelmChart v1beta2 | HelmChart v1beta1 | Description |
+|---|---|---|
+| `apiVersion: kots.io/v1beta2` | `apiVersion: kots.io/v1beta1` | `apiVersion` is updated to `kots.io/v1beta2` |
+| `releaseName` | `chart.releaseName` | `releaseName` is a top level field under `spec` |
+| N/A | `helmVersion` | `helmVersion` field is removed |
+| N/A | `useHelmInstall` | `useHelmInstall` field is removed |
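+
+As a sketch, a minimal HelmChart custom resource using the v1beta2 fields above might look like this. The chart name and version are illustrative:
+
+```yaml
+apiVersion: kots.io/v1beta2
+kind: HelmChart
+metadata:
+  name: my-chart
+spec:
+  chart:
+    name: my-chart
+    chartVersion: 1.0.0
+  # releaseName is a top-level field under spec in v1beta2
+  releaseName: my-chart
+  values: {}
+```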
+
+[View a larger version of this image](/images/resource-status-hover-current-state.png)
+
+Viewing these resource status details is helpful for understanding which resources are contributing to the aggregate application status. For example, when an application has an Unavailable status, that means that one or more resources are Unavailable. By viewing the resource status insights on the **Instance details** page, you can quickly understand which resource or resources are Unavailable for the purpose of troubleshooting.
+
+Granular resource status details are automatically available when the Replicated SDK is installed alongside the application. For information about how to distribute and install the SDK with your application, see [Installing the Replicated SDK](/vendor/replicated-sdk-installing).
+
+## Understanding Application Status
+
+This section provides information about how Replicated interprets and aggregates the status of Kubernetes resources for your application to report an application status.
+
+### About Resource Statuses {#resource-statuses}
+
+Possible resource statuses are Ready, Updating, Degraded, Unavailable, and Missing.
+
+The following table lists the supported Kubernetes resources and the conditions that contribute to each status:
+
+| Domain | Description |
+|---|---|
+| `replicated.app` | Upstream application YAML and metadata is pulled from `replicated.app`. The current running version of the application (if any), as well as a license ID and application ID to authenticate, are all sent to `replicated.app`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `replicated.app`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L60-L65) in GitHub. |
+| `registry.replicated.com` | Some applications host private images in the Replicated registry at this domain. The on-prem docker client uses a license ID to authenticate to `registry.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `registry.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L20-L25) in GitHub. |
+| `proxy.replicated.com` | Private Docker images are proxied through `proxy.replicated.com`. This domain is owned by Replicated, Inc., which is headquartered in Los Angeles, CA. For the range of IP addresses for `proxy.replicated.com`, see [replicatedhq/ips](https://github.com/replicatedhq/ips/blob/main/ip_addresses.json#L52-L57) in GitHub. |
+
+ [View a larger image](/images/helm-install-button.png)
+
+1. In the **Helm install instructions** dialog, run the first command to log in to the Replicated registry:
+
+ ```bash
+ helm registry login registry.replicated.com --username EMAIL_ADDRESS --password LICENSE_ID
+ ```
+ Where:
+ * `EMAIL_ADDRESS` is the customer's email address
+ * `LICENSE_ID` is the ID of the customer's license
+
+ :::note
+ You can safely ignore the following warning message: `WARNING: Using --password via the CLI is insecure.` This message is displayed because using the `--password` flag stores the password in bash history. This login method is not insecure.
+
+ Alternatively, to avoid the warning message, you can click **(show advanced)** in the **Helm install instructions** dialog to display a login command that excludes the `--password` flag. With the advanced login command, you are prompted for the password after running the command.
+ :::
+
+1. (Optional) Run the second and third commands to install the preflight plugin and run preflight checks. If no preflight checks are defined, these commands are not displayed. For more information about defining and running preflight checks, see [About Preflight Checks and Support Bundles](preflight-support-bundle-about).
+
+1. Run the fourth command to install using Helm:
+
+ ```bash
+ helm install RELEASE_NAME oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART_NAME
+ ```
+ Where:
+ * `RELEASE_NAME` is the name of the Helm release.
+ * `APP_SLUG` is the slug for the application. For information about how to find the application slug, see [Get the Application Slug](/vendor/vendor-portal-manage-app#slug).
+ * `CHANNEL` is the lowercased name of the channel where the release was promoted, such as `beta` or `unstable`. Channel is not required for releases promoted to the Stable channel.
+ * `CHART_NAME` is the name of the Helm chart.
+
+ :::note
+ To install the SDK with custom RBAC permissions, include the `--set` flag with the `helm install` command to override the value of the `replicated.serviceAccountName` field with a custom service account. For more information, see [Customizing RBAC for the SDK](/vendor/replicated-sdk-customizing#customize-rbac-for-the-sdk).
+ :::
+
+1. (Optional) In the Vendor Portal, click **Customers**. You can see that the customer you used to install is marked as **Active** and the details about the application instance are listed under the customer name.
+
+ **Example**:
+
+ 
+ [View a larger version of this image](/images/sdk-customer-active-example.png)
\ No newline at end of file
diff --git a/docs/reference/installer-history.mdx b/docs/reference/installer-history.mdx
new file mode 100644
index 0000000000..c23ed6ea93
--- /dev/null
+++ b/docs/reference/installer-history.mdx
@@ -0,0 +1,36 @@
+import KurlAvailability from "../partials/kurl/_kurl-availability.mdx"
+
+# Installer History
+
+| Label | Type | Description |
+|---|---|---|
+| customer_id | string | Customer identifier |
+| customer_name | string | The customer name |
+| customer_created_date | timestamptz | The date the license was created |
+| customer_license_expiration_date | timestamptz | The expiration date of the license |
+| customer_channel_id | string | The channel id the customer is assigned |
+| customer_channel_name | string | The channel name the customer is assigned |
+| customer_app_id | string | App identifier |
+| customer_last_active | timestamptz | The date the customer was last active |
+| customer_type | string | One of prod, trial, dev, or community |
+| customer_status | string | The current status of the customer |
+| customer_is_airgap_enabled | boolean | The feature the customer has enabled - Airgap |
+| customer_is_geoaxis_supported | boolean | The feature the customer has enabled - GeoAxis |
+| customer_is_gitops_supported | boolean | The feature the customer has enabled - KOTS Auto-GitOps |
+| customer_is_embedded_cluster_download_enabled | boolean | The feature the customer has enabled - Embedded Cluster |
+| customer_is_identity_service_supported | boolean | The feature the customer has enabled - Identity |
+| customer_is_snapshot_supported | boolean | The feature the customer has enabled - Snapshot |
+| customer_has_entitlements | boolean | Indicates the presence or absence of entitlements and entitlement_* columns |
+| customer_entitlement__* | string/integer/boolean | The values of any custom license fields configured for the customer. For example, customer_entitlement__active-users. |
+| customer_created_by_id | string | The ID of the actor that created this customer: user ID or a hashed value of a token. |
+| customer_created_by_type | string | The type of the actor that created this customer, such as user or service-account. |
+| customer_created_by_description | string | The description of the actor that created this customer. Includes username or token name depending on actor type. |
+| customer_created_by_link | string | The link to the actor that created this customer. |
+| customer_created_by_timestamp | timestamptz | The date the customer was created by this actor. When available, matches the value in the customer_created_date column. |
+| customer_updated_by_id | string | The ID of the actor that updated this customer: user ID or a hashed value of a token. |
+| customer_updated_by_type | string | The type of the actor that updated this customer, such as user or service-account. |
+| customer_updated_by_description | string | The description of the actor that updated this customer. Includes username or token name depending on actor type. |
+| customer_updated_by_link | string | The link to the actor that updated this customer. |
+| customer_updated_by_timestamp | timestamptz | The date the customer was updated by this actor. |
+| instance_id | string | Instance identifier |
+| instance_is_active | boolean | The instance has pinged within the last 24 hours |
+| instance_first_reported_at | timestamptz | The timestamp of the first recorded check-in for the instance. |
+| instance_last_reported_at | timestamptz | The timestamp of the last recorded check-in for the instance. |
+| instance_first_ready_at | timestamptz | The timestamp of when the cluster was considered ready |
+| instance_kots_version | string | The version of KOTS or the Replicated SDK that the instance is running. The version is displayed as a Semantic Versioning compliant string. |
+| instance_k8s_version | string | The version of Kubernetes running in the cluster. |
+| instance_is_airgap | boolean | The cluster is air gapped |
+| instance_is_kurl | boolean | The instance is installed in a Replicated kURL cluster (embedded cluster) |
+| instance_last_app_status | string | The instance's last reported app status |
+| instance_client | string | Indicates whether this instance is managed by KOTS or if it's a Helm CLI deployed instance using the SDK. |
+| instance_kurl_node_count_total | integer | Total number of nodes in the cluster. Applies only to kURL clusters. |
+| instance_kurl_node_count_ready | integer | Number of nodes in the cluster that are in a healthy state and ready to run Pods. Applies only to kURL clusters. |
+| instance_cloud_provider | string | The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. |
+| instance_cloud_provider_region | string | The cloud provider region where the instance is running. For example, us-central1-b. |
+| instance_app_version | string | The current application version |
+| instance_version_age | string | The age (in days) of the currently deployed release. This is relative to the latest available release on the channel. |
+| instance_is_gitops_enabled | boolean | Reflects whether the end user has enabled KOTS Auto-GitOps for deployments in their environment |
+| instance_gitops_provider | string | If KOTS Auto-GitOps is enabled, reflects the GitOps provider in use. For example, GitHub Enterprise. |
+| instance_is_skip_preflights | boolean | Indicates whether an end user elected to skip preflight check warnings or errors |
+| instance_preflight_status | string | The last reported preflight check status for the instance |
+| instance_k8s_distribution | string | The Kubernetes distribution of the cluster. |
+| instance_has_custom_metrics | boolean | Indicates the presence or absence of custom metrics and custom_metric__* columns |
+| instance_custom_metrics_reported_at | timestamptz | Timestamp of the latest custom metric |
+| custom_metric__* | string/integer/boolean | The values of any custom metrics that have been sent by the instance. For example, custom_metric__active_users |
+| instance_has_tags | boolean | Indicates the presence or absence of instance tags and instance_tag__* columns |
+| instance_tag__* | string/integer/boolean | The values of any instance tags that have been set by the vendor. For example, instance_tag__name |
+
+ [View a larger version of this image](/images/resource-status-hover-current-state.png)
+
+* **App version**: The version label of the currently running release. You define the version label in the release properties when you promote the release. For more information about defining release properties, see [Properties](releases-about#properties) in _About Channels and Releases_.
+
+ If there is no version label for the release, then the Vendor Portal displays the release sequence in the **App version** field. You can find the sequence number associated with a release by running the `replicated release ls` command. See [release ls](/reference/replicated-cli-release-ls) in the _Replicated CLI_ documentation.
+
+* **Version age**: The absolute and relative ages of the instance:
+
+ * **Absolute age**: `now - current_release.promoted_date`
+
+ The number of days since the currently running application version was promoted to the channel. For example, if the instance is currently running version 1.0.0, and version 1.0.0 was promoted to the channel 30 days ago, then the absolute age is 30.
+
+ * **Relative age (Days Behind Latest)**: `channel.latest_release.promoted_date - current_release.promoted_date`
+
+ The number of days between when the currently running application version was promoted to the channel and when the latest available version on the channel was promoted.
+
+ For example, the instance is currently running version 1.0.0, which was promoted to the Stable channel. The latest version available on the Stable channel is 1.5.0. If 1.0.0 was promoted 30 days ago and 1.5.0 was promoted 10 days ago, then the relative age of the application instance is 20 days.
+
+* **Versions behind**: The number of versions between the currently running version and the latest version available on the channel where the instance is assigned.
+
+ For example, the instance is currently running version 1.0.0, which was promoted to the Stable channel. If the later versions 1.1.0, 1.2.0, 1.3.0, 1.4.0, and 1.5.0 were also promoted to the Stable channel, then the instance is five versions behind.
+
+* **Last check-in**: The timestamp when the instance most recently sent data to the Vendor Portal.
+
+### Instance Insights {#insights}
+
+The **Insights** section includes the following metrics computed by the Vendor Portal:
+
+* [Uptime](#uptime)
+* [Time to Install](#time-to-install)
+
+#### Uptime
+
+The Vendor Portal computes the total uptime for the instance as the fraction of time that the instance spends with a Ready, Updating, or Degraded status. The Vendor Portal also provides more granular details about uptime in the **Instance Uptime** graph. See [Instance Uptime](#instance-uptime) below.
+
+High uptime indicates that the application is reliable and able to handle the demands of the customer environment. Low uptime might indicate that the application is prone to errors or failures. By measuring the total uptime, you can better understand the performance of your application.
+
+The following table lists the application statuses that are associated with an Up or Down state in the total uptime calculation:
+
+| Uptime State | Application Statuses |
+|---|---|
+| Up | Ready, Updating, or Degraded |
+| Down | Missing or Unavailable |
+
+The following table lists the application statuses that are associated with each state in the **Instance Uptime** graph:
+
+| Uptime State | Application Statuses |
+|---|---|
+| Up | Ready or Updating |
+| Degraded | Degraded |
+| Down | Missing or Unavailable |
+| Label | Description |
+|---|---|
+| App Channel | The ID of the channel where the application instance is assigned. |
+| App Version | The version label of the release that the instance is currently running. The version label is the version that you assigned to the release when promoting it to a channel. |
+
+| Label | Description |
+|---|---|
+| App Status | A string that represents the status of the application. Possible values: Ready, Updating, Degraded, Unavailable, Missing. For applications that include the Replicated SDK, hover over the application status to view the statuses of the individual resources deployed by the application. For more information, see Enabling and Understanding Application Status. |
+
+| Label | Description |
+|---|---|
+| Cluster Type | Indicates if the cluster was provisioned by kURL. For more information about kURL clusters, see Creating a kURL installer. |
+| Kubernetes Version | The version of Kubernetes running in the cluster. |
+| Kubernetes Distribution | The Kubernetes distribution of the cluster. |
+| kURL Nodes Total | Total number of nodes in the cluster. **Note**: Applies only to kURL clusters. |
+| kURL Nodes Ready | Number of nodes in the cluster that are in a healthy state and ready to run Pods. **Note**: Applies only to kURL clusters. |
+| New kURL Installer | The ID of the kURL installer specification that kURL used to provision the cluster. Indicates that a new installer specification was added. An installer specification is a manifest file with `kind: Installer`. For more information about installer specifications for kURL, see Creating a kURL installer. **Note**: Applies only to kURL clusters. |
+
+| Label | Description |
+|---|---|
+| Cloud Provider | The cloud provider where the instance is running. Cloud provider is determined by the IP address that makes the request. |
+| Cloud Region | The cloud provider region where the instance is running. |
+
+| Label | Description |
+|---|---|
+| KOTS Version | The version of KOTS that the instance is running. KOTS version is displayed as a Semantic Versioning compliant string. |
+
+| Label | Description |
+|---|---|
+| Replicated SDK Version | The version of the Replicated SDK that the instance is running. SDK version is displayed as a Semantic Versioning compliant string. |
+
+| Label | Description |
+|---|---|
+| Versions Behind | The number of versions between the version that the instance is currently running and the latest version available on the channel. Computed by the Vendor Portal each time it receives instance data. |
+
+1. On the Instance Details page, click **Notifications**.
+
+
+
+1. From the **Configure Instance Notifications** dialog, select the types of notifications to enable.
+
+ 
+
+ [View a larger version of this image](/images/instance-notifications-dialog.png)
+
+1. Click **Save**.
+
+1. Repeat these steps to configure notifications for other application instances.
+
+
+## Test Notifications
+
+After you enable notifications for a running development instance, test that your notifications are working as expected.
+
+Do this by forcing your application into a non-ready state. For example, you can delete one or more application Pods and wait for a ReplicationController to recreate them.
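+
+For example, the following is a minimal sketch of forcing a restart, assuming your application Pods carry a hypothetical `app=my-app` label:
+
+```bash
+# Delete the application Pods; the controller managing them recreates replacements
+kubectl delete pod -l app=my-app
+```
+
+While the Pods restart, the application status moves out of the Ready state, which triggers the notification.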
+
+Then, look for notifications in the assigned Slack channel. You also receive an email if you enabled email notifications.
+
+:::note
+There is a 30-second buffer between event detection and notifications being sent. This buffer provides better roll-ups and reduces noise.
+:::
\ No newline at end of file
diff --git a/docs/reference/kots-cli-admin-console-garbage-collect-images.md b/docs/reference/kots-cli-admin-console-garbage-collect-images.md
deleted file mode 100644
index 72d97479e4..0000000000
--- a/docs/reference/kots-cli-admin-console-garbage-collect-images.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# admin-console garbage-collect-images
-
-Starts image garbage collection.
-The KOTS Admin Console must be running and an application must be installed in order to use this command.
-
-### Usage
-```bash
-kubectl kots admin-console garbage-collect-images -n <namespace>
-```
-
-### Flags
-
-| Flag | Type | Description |
-|---|---|---|
-| `--rootdir` | string | Root directory where the YAML will be written (default `${HOME}` or `%USERPROFILE%`) |
-| `--namespace` | string | Target namespace for the Admin Console |
-| `--shared-password` | string | Shared password to use when deploying the Admin Console |
-| `--http-proxy` | string | Sets HTTP_PROXY environment variable in all KOTS Admin Console components |
-| `--https-proxy` | string | Sets HTTPS_PROXY environment variable in all KOTS Admin Console components |
-| `--no-proxy` | string | Sets NO_PROXY environment variable in all KOTS Admin Console components |
-| `--private-ca-configmap` | string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment |
-| `--with-minio` | bool | Set to true to include a local MinIO instance to be used for storage (default true) |
-| `--minimal-rbac` | bool | Set to true to generate namespace-scoped RBAC resources for KOTS instead of cluster-scoped RBAC |
-| `--additional-namespaces` | string | Comma-delimited list of additional namespaces managed by KOTS outside of the namespace where it is deployed. Ignored without `--minimal-rbac=true` |
-| `--storage-class` | string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used |
-| Flag | Type | Description |
-|---|---|---|
-| `--additional-annotations` | bool | Additional annotations to add to kotsadm pods. |
-| `--additional-labels` | bool | Additional labels to add to kotsadm pods. |
-| `--airgap` | bool | Set to true to run install in air gapped mode. Setting --airgap-bundle implies --airgap=true. Default: false. For more information, see Air Gap Installation in Existing Clusters with KOTS. |
-| `--airgap-bundle` | string | Path to the application air gap bundle where application metadata will be loaded from. Setting --airgap-bundle implies --airgap=true. For more information, see Air Gap Installation in Existing Clusters with KOTS. |
-| `--app-version-label` | string | The application version label to install. If not specified, the latest version is installed. |
-| `--config-values` | string | Path to a manifest file containing configuration values. This manifest must be `apiVersion: kots.io/v1beta1` and `kind: ConfigValues`. For more information, see Installing with the KOTS CLI. |
-| `--copy-proxy-env` | bool | Copy proxy environment variables from the current environment into all Admin Console components. Default: false |
-| `--disable-image-push` | bool | Set to true to disable images from being pushed to the private registry. Default: false |
-| `--ensure-rbac` | bool | When false, KOTS does not attempt to create the RBAC resources necessary to manage applications. Default: true. If a role specification is needed, use the [generate-manifests](kots-cli-admin-console-generate-manifests) command. |
-| `--http-proxy` | string | Sets HTTP_PROXY environment variable in all Admin Console components. |
-| `--https-proxy` | string | Sets HTTPS_PROXY environment variable in all Admin Console components. |
-| `--license-file` | string | Path to a license file. |
-| `--local-path` | string | Specify a local path to test the behavior of rendering a Replicated application locally. Only supported on Replicated application types. |
-| `--name` | string | Name of the application to use in the Admin Console. |
-| `--no-port-forward` | bool | Set to true to disable automatic port forwarding. Default: false |
-| `--no-proxy` | string | Sets NO_PROXY environment variable in all Admin Console components. |
-| `--port` | string | Override the local port used to access the Admin Console. Default: 8800 |
-| `--private-ca-configmap` | string | Name of a ConfigMap containing private CAs to add to the kotsadm deployment. |
-| `--preflights-wait-duration` | string | Timeout to be used while waiting for preflights to complete. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 15m |
-| `--repo` | string | Repo URI to use when installing a Helm chart. |
-| `--shared-password` | string | Shared password to use when deploying the Admin Console. |
-| `--skip-compatibility-check` | bool | Set to true to skip compatibility checks between the current KOTS version and the application. Default: false |
-| `--skip-preflights` | bool | Set to true to skip preflight checks. Default: false. If any strict preflight checks are configured, the --skip-preflights flag is not honored because strict preflight checks must run and contain no failures before the application is deployed. For more information, see [Defining Preflight Checks](/vendor/preflight-defining). |
-| `--skip-rbac-check` | bool | Set to true to bypass the RBAC check. Default: false |
-| `--skip-registry-check` | bool | Set to true to skip the connectivity test and validation of the provided registry information. Default: false |
-| `--use-minimal-rbac` | bool | When set to true, KOTS RBAC permissions are limited to the namespace where it is installed. To use --use-minimal-rbac, the application must support namespace-scoped installations and the user must have the minimum RBAC permissions required by KOTS in the target namespace. For a complete list of requirements, see [Namespace-scoped RBAC Requirements](/enterprise/installing-general-requirements#namespace-scoped) in _Installation Requirements_. Default: false |
-| `--wait-duration` | string | Timeout to be used while waiting for individual components to be ready. Must be in [Go duration](https://pkg.go.dev/time#ParseDuration) format. For example, 10s, 2m. Default: 2m |
-| `--with-minio` | bool | When set to true, KOTS deploys a local MinIO instance for storage and uses MinIO for host path and NFS snapshot storage. Default: true |
-| `--storage-class` | string | Sets the storage class to use for the KOTS Admin Console components. Default: unset, which means the default storage class will be used. |
-| Flag | Type | Description |
-|---|---|---|
-| `--force` | bool | Removes the reference even if the application has already been deployed. |
-| `--undeploy` | bool | Un-deploys the application by deleting all of its resources from the cluster. |
-| `-n` | string | The namespace where the target application is deployed. |
-| Flag | Type | Description |
-|---|---|---|
-| `-n, --namespace` | string | The namespace of the Admin Console (required) |
-| `--hostpath` | string | A local host path on the node |
-| `--force-reset` | bool | Bypass the reset prompt and force resetting the host path. (default `false`) |
-| `--output` | string | Output format. Supported values: `json` |
-
-| Flag | Type | Description |
-|---|---|---|
-| `-n, --namespace` | string | The namespace of the Admin Console (required) |
-| `--nfs-server` | string | The hostname or IP address of the NFS server (required) |
-| `--nfs-path` | string | The path that is exported by the NFS server (required) |
-| `--force-reset` | bool | Bypass the reset prompt and force resetting the NFS path. (default `false`) |
-| `--output` | string | Output format. Supported values: `json` |
-
-| Flag | Type | Description |
-|---|---|---|
-| `-n, --namespace` | string | The namespace of the Admin Console (required) |
-| `--access-key-id` | string | The AWS access key ID to use for accessing the bucket (required) |
-| `--bucket` | string | Name of the object storage bucket where backups should be stored (required) |
-| `--endpoint` | string | The S3 endpoint (for example, http://some-other-s3-endpoint) (required) |
-| `--path` | string | Path to a subdirectory in the object store bucket |
-| `--region` | string | The region where the bucket exists (required) |
-| `--secret-access-key` | string | The AWS secret access key to use for accessing the bucket (required) |
-| `--cacert` | string | File containing a certificate bundle to use when verifying TLS connections to the object store |
-| `--skip-validation` | bool | Skip the validation of the S3 bucket (default `false`) |
+
+[View a larger version of this image](/images/gitea-ec-ready.png)
+
+### How do Embedded Cluster installations work?
+
+To install with Embedded Cluster, users first download and extract the Embedded Cluster installation assets for the target application release on their VM or bare metal server. Then, they run an Embedded Cluster installation command to provision the cluster. During installation, Embedded Cluster also installs Replicated KOTS in the cluster, which deploys the Admin Console.
+
+After the installation command finishes, users log in to the Admin Console to provide application configuration values, optionally join more nodes to the cluster, run preflight checks, and deploy the application.
+
+Customer-specific Embedded Cluster installation instructions are provided in the Replicated Vendor Portal. For more information, see [Installing with Embedded Cluster](/enterprise/installing-embedded).
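+
+For illustration only, the installation flow typically looks like the following sketch, where the binary name matches the application slug and the asset and license filenames are hypothetical:
+
+```bash
+# Extract the downloaded installation assets, then run the install command
+tar -xvzf my-app.tgz
+sudo ./my-app install --license license.yaml
+```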
+
+### Does Replicated support installations into air gap environments?
+
+Yes. The Embedded Cluster and KOTS installers support installation in _air gap_ environments with no outbound internet access.
+
+To support air gap installations, vendors can build air gap bundles for their application in the Vendor Portal that contain all the required assets for a specific release of the application. Additionally, Replicated provides bundles that contain the assets for the Replicated installers.
+
+For more information about how to install with Embedded Cluster and KOTS in air gap environments, see [Air Gap Installation with Embedded Cluster](/enterprise/installing-embedded-air-gap) and [Air Gap Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster-airgapped).
+
+### Can I deploy Helm charts with KOTS?
+
+Yes. An application deployed with KOTS can use one or more Helm charts, can include Helm charts as components, and can use more than a single instance of any Helm chart. Each Helm chart requires a unique HelmChart custom resource (`apiVersion: kots.io/v1beta2`) in the release.
+
+For more information, see [About Distributing Helm Charts with KOTS](/vendor/helm-native-about).
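+
+For reference, a minimal HelmChart custom resource looks like the following sketch (the chart name, version, and release name are illustrative):
+
+```yaml
+apiVersion: kots.io/v1beta2
+kind: HelmChart
+metadata:
+  name: my-chart
+spec:
+  chart:
+    # Must match the name and version in the chart's Chart.yaml
+    name: my-chart
+    chartVersion: 1.0.0
+  releaseName: my-chart-release-1
+```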
+
+### What's the difference between Embedded Cluster and kURL?
+
+Replicated Embedded Cluster is a successor to Replicated kURL. Compared to kURL, Embedded Cluster offers significantly faster installation, updates, and node joins, a redesigned Admin Console UI, improved support for multi-node clusters, one-click updates that update the application and the cluster at the same time, and more.
+
+
+
+[View a larger version of this image](/images/gitea-open-app.png)
\ No newline at end of file
diff --git a/docs/reference/kurl-reset.mdx b/docs/reference/kurl-reset.mdx
new file mode 100644
index 0000000000..2d4cf3a33d
--- /dev/null
+++ b/docs/reference/kurl-reset.mdx
@@ -0,0 +1,29 @@
+import KurlAvailability from "../partials/kurl/_kurl-availability.mdx"
+
+# Resetting a kURL Cluster
+
+
+
+ [View a larger version of this image](/images/customer-reporting-details.png)
+
+* Archive customers. For more information, see [Creating and Managing Customers](releases-creating-customer).
+
+* Click on a customer on the **Customers** page to access the following customer-specific pages:
+ * [Reporting](#about-the-customer-reporting-page)
+ * [Manage customer](#about-the-manage-customer-page)
+ * [Support bundles](#about-the-customer-support-bundles-page)
+
+### About the Customer Reporting Page
+
+The **Reporting** page for a customer displays data about the active application instances associated with each customer. The following shows an example of the **Reporting** page for a customer that has two active application instances:
+
+
+[View a larger version of this image](/images/customer-reporting-page.png)
+
+For more information about interpreting the data on the **Reporting** page, see [Customer Reporting](customer-reporting).
+
+### About the Manage Customer Page
+
+The **Manage customer** page for a customer displays details about the customer license, including the customer name and email, the license expiration policy, custom license fields, and more.
+
+The following shows an example of the **Manage customer** page:
+
+
+[View a larger version of this image](/images/customer-details.png)
+
+From the **Manage customer** page, you can view and edit the customer's license fields or archive the customer. For more information, see [Creating and Managing Customers](releases-creating-customer).
+
+### About the Customer Support Bundles Page
+
+The **Support bundles** page for a customer displays details about the support bundles collected from the customer. Customers with the **Support Bundle Upload Enabled** entitlement can provide support bundles through the KOTS Admin Console, or you can upload support bundles manually in the Vendor Portal by going to **Troubleshoot > Upload a support bundle**. For more information about uploading and analyzing support bundles, see [Inspecting Support Bundles](support-inspecting-support-bundles).
+
+The following shows an example of the **Support bundles** page:
+
+
+[View a larger version of this image](/images/customer-support-bundles.png)
+
+As shown in the screenshot above, the **Support bundles** page lists details about the collected support bundles, such as the date the support bundle was collected and the debugging insights found. You can click on a support bundle to view it in the **Support bundle analysis** page. You can also click **Delete** to delete the support bundle, or click **Customer Reporting** to view the **Reporting** page for the customer.
+
+## About Licensing with Replicated
+
+### About Syncing Licenses
+
+When you edit customer licenses for an application installed with a Replicated installer (Embedded Cluster, KOTS, kURL), your customers can use the KOTS Admin Console to get the latest license details from the Vendor Portal, then deploy a new version that includes the license changes. Deploying a new version with the license changes ensures that any license fields that you have templated in your release using [KOTS template functions](/reference/template-functions-about) are rendered with the latest license details.
+
+For online instances, KOTS pulls license details from the Vendor Portal when:
+* A customer clicks **Sync license** in the Admin Console.
+* An automatic or manual update check is performed by KOTS.
+* An update is performed with Replicated Embedded Cluster. See [Performing Updates with Embedded Cluster](/enterprise/updating-embedded).
+* An application status changes. See [Current State](instance-insights-details#current-state) in _Instance Details_.
+
+For more information, see [Updating Licenses in the Admin Console](/enterprise/updating-licenses).
+
+### About Syncing Licenses in Air-Gapped Environments
+
+To update licenses in air gap installations, customers need to upload the updated license file to the Admin Console.
+
+After you update the license fields in the Vendor Portal, you can notify customers by either sending them a new license file or instructing them to log in to their Download Portal to download the new license.
+
+For more information, see [Updating Licenses in the Admin Console](/enterprise/updating-licenses).
+
+### Retrieving License Details with the SDK API
+
+The [Replicated SDK](replicated-sdk-overview) includes an in-cluster API that can be used to retrieve up-to-date customer license information from the Vendor Portal during runtime through the [`license`](/reference/replicated-sdk-apis#license) endpoints. This means that you can add logic to your application to get the latest license information without the customer needing to perform a license update. The SDK API polls the Vendor Portal for updated data every four hours.
+
+In KOTS installations that include the SDK, users need to update their licenses from the Admin Console as described in [About Syncing Licenses](#about-syncing-licenses) above. However, any logic in your application that uses the SDK API will update the user's license information without the customer needing to deploy a license update in the Admin Console.
+
+For information about how to use the SDK API to query license entitlements at runtime, see [Querying Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk).
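+
+For example, the following sketch queries a license field from inside the cluster, assuming the SDK's default in-cluster service name (`replicated`) and port (`3000`):
+
+```bash
+# Returns the current value of the expires_at license field from the SDK's in-cluster API
+curl replicated:3000/api/v1/license/fields/expires_at
+```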
+
+### License Expiration Handling {#expiration}
+
+The built-in `expires_at` license field defines the expiration date for a customer license. When you set an expiration date in the Vendor Portal, the `expires_at` field is set to midnight UTC on the date selected.
+
+Replicated enforces the following logic when a license expires:
+* By default, instances with expired licenses continue to run.
+  To change the behavior of your application when a license expires, you can add custom logic in your application that queries the `expires_at` field using the Replicated SDK in-cluster API. For more information, see [Querying Entitlements with the Replicated SDK API](/vendor/licenses-reference-sdk).
+* Customers with expired licenses cannot log in to the Replicated registry to pull a Helm chart for installation or upgrade.
+* Customers with expired licenses cannot pull application images through the Replicated proxy registry or from the Replicated registry.
+* In Replicated KOTS installations, KOTS prevents instances with expired licenses from receiving updates.
+
+### Replacing Licenses for Existing Installations
+
+Community licenses are the only license type that can be replaced with a new license without needing to reinstall the application. For more information, see [Community Licenses](licenses-about-types).
+
+Unless the existing customer is using a community license, it is not possible to replace one license with another license without reinstalling the application. When you need to make changes to a customer's entitlements, Replicated recommends that you edit the customer's license details in the Vendor Portal, rather than issuing a new license.
diff --git a/docs/reference/licenses-adding-custom-fields.md b/docs/reference/licenses-adding-custom-fields.md
new file mode 100644
index 0000000000..576e99128d
--- /dev/null
+++ b/docs/reference/licenses-adding-custom-fields.md
@@ -0,0 +1,111 @@
+# Managing Customer License Fields
+
+This topic describes how to manage customer license fields in the Replicated Vendor Portal, including how to add custom fields and set initial values for the built-in fields.
+
+## Set Initial Values for Built-In License Fields (Beta)
+
+You can set initial values to populate the **Create Customer** form in the Vendor Portal when a new customer is created. This ensures that each new customer created from the Vendor Portal UI starts with the same set of built-in license field values.
+
+:::note
+Initial values are not applied to new customers created through the Vendor API v3. For more information, see [Create a customer](https://replicated-vendor-api.readme.io/reference/createcustomer-1) in the Vendor API v3 documentation.
+:::
+
+These _initial_ values differ from _default_ values in that setting initial values does not update the license field values for any existing customers.
+
+To set initial values for built-in license fields:
+
+1. In the Vendor Portal, go to **License Fields**.
+
+1. Under **Built-in license options**, click **Edit** next to each license field where you want to set an initial value.
+
+ 
+
+ [View a larger version of this image](/images/edit-initial-value.png)
+
+## Manage Custom License Fields
+
+You can create custom license fields in the Vendor Portal. For example, you can create a custom license field to set the number of active users permitted. Or, you can create a field that sets the number of nodes a customer is permitted on their cluster.
+
+The custom license fields that you create are displayed in the Vendor Portal for all new and existing customers. If the custom field is not hidden, it is also displayed to customers under the **Licenses** tab in the Replicated Admin Console.
+
+### Limitation
+
+The maximum size for a license field value is 64KB.
+
+### Create Custom License Fields
+
+To create a custom license field:
+
+1. Log in to the Vendor Portal and select the application.
+
+1. On the **License Fields** page, click **Create license field**.
+
+
+
+ [View a larger version of this image](/images/license-add-custom-field.png)
+
+1. Complete the following fields:
+
+ | Field | Description |
+ |-----------------------|------------------------|
+ | Field | The name used to reference the field. This value cannot be changed. |
+ | Title| The display name for the field. This is how the field appears in the Vendor Portal and the Admin Console. You can change the title in the Vendor Portal. |
+ | Type| The field type. Supported formats include integer, string, text (multi-line string), and boolean values. This value cannot be changed. |
+ | Default | The default value for the field for both existing and new customers. It is a best practice to provide a default value when possible. The maximum size for a license field value is 64KB. |
+ | Required | If checked, this prevents the creation of customers unless this field is explicitly defined with a value. |
+ | Hidden | If checked, the field is not visible to your customer in the Replicated Admin Console. The field is still visible to you in the Vendor Portal. **Note**: The Hidden field is displayed only for vendors with access to the Replicated installers (KOTS, kURL, Embedded Cluster). |
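+
+   For reference, a custom field appears in the downloaded license YAML under `spec.entitlements`. The following is a minimal sketch with a hypothetical `max_users` field:
+
+   ```yaml
+   apiVersion: kots.io/v1beta1
+   kind: License
+   spec:
+     entitlements:
+       max_users:
+         title: Max Users
+         value: 100
+         valueType: Integer
+   ```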
+
+### Update Custom License Fields
+
+To update a custom license field:
+
+1. Log in to the Vendor Portal and select the application.
+1. On the **License Fields** page, click **Edit Field** on the right side of the target row. Changing the default value for a field updates the value for each existing customer record that has not overridden the default value.
+
+ :::important
+   Enabling **Is this field required?** updates the license field to be required on all new and existing customers. If you enable **Is this field required?**, you must either set a default value for the field or manually update each existing customer to provide a value for the field.
+ :::
+
+### Set Customer-Specific Values for Custom License Fields
+
+To set a customer-specific value for a custom license field:
+
+1. Log in to the Vendor Portal and select the application.
+1. Click **Customers**.
+1. For the target customer, click the **Manage customer** button.
+1. Under **Custom fields**, enter values for the target custom license fields for the customer.
+
+ :::note
+ The maximum size for a license field value is 64KB.
+ :::
+
+
+
+ [View a larger version of this image](/images/customer-license-custom-fields.png)
+
+### Delete Custom License Fields
+
+Deleted license fields and their values do not appear in the customer's license in any location, including your view in the Vendor Portal, the downloaded YAML version of the license, and the Admin Console **License** screen.
+
+By default, deleting a custom license field also deletes all of the values associated with the field in each customer record.
+
+Only administrators can delete license fields.
+
+:::important
+Replicated recommends that you take care when deleting license fields.
+
+Outages can occur for existing deployments if your application or the Admin Console **Config** page expect a license file to provide a required value.
+:::
+
+To delete a custom license field:
+
+1. Log in to the Vendor Portal and select the application.
+1. On the **License Fields** page, click **Edit Field** on the right side of the target row.
+1. Click **Delete** on the bottom left of the dialog.
+1. (Optional) Enable **Preserve License Values** to save any customer-specific values for the license field (that is, values not set by the field's default) in each customer record. Preserved license values are not visible to you or the customer.
+
+ :::note
+ If you enable **Preserve License Values**, you can create a new field with the same name and `type` as the deleted field to reinstate the preserved values.
+ :::
+
+1. Follow the instructions in the dialog and click **Delete**.
\ No newline at end of file
diff --git a/docs/reference/licenses-download.md b/docs/reference/licenses-download.md
new file mode 100644
index 0000000000..0354d06dc9
--- /dev/null
+++ b/docs/reference/licenses-download.md
@@ -0,0 +1,28 @@
+import AirGapLicenseDownload from "../partials/install/_airgap-license-download.mdx"
+
+# Downloading Customer Licenses
+
+This topic describes how to download a license file from the Replicated Vendor Portal.
+
+For information about how to download customer licenses with the Vendor API v3, see [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense) in the Vendor API v3 documentation.
+
+## Download Licenses
+
+You can download license files for your customers from the **Customer** page in the Vendor Portal.
+
+To download a license:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), go to the **Customers** page.
+1. In the row for the target customer, click the **Download License** button.
+
+ 
+
+ [View a larger version of this image](/images/download-license-button.png)
+
+## Enable and Download Air Gap Licenses {#air-gap-license}
+
+The **Airgap Download Enabled** license option allows KOTS to install an application without outbound internet access using the `.airgap` bundle.
+
+To enable the air gap entitlement and download the license:
+
+| Install Type | Description | Requirements |
+|---|---|---|
+| Existing Cluster (Helm CLI) | Allows the customer to install with Helm in an existing cluster. The customer does not have access to the Replicated installers (Embedded Cluster, KOTS, and kURL). When the Helm CLI Air Gap Instructions (Helm CLI only) install option is also enabled, the Download Portal displays instructions on how to pull Helm installable images into a local repository. See Understanding Additional Install Options below. | The latest release promoted to the channel where the customer is assigned must contain one or more Helm charts. It can also include Replicated custom resources, such as the Embedded Cluster Config custom resource, the KOTS HelmChart, Config, and Application custom resources, or the Troubleshoot Preflight and SupportBundle custom resources. |
+| Existing Cluster (KOTS install) | Allows the customer to install with Replicated KOTS in an existing cluster. | |
+| kURL Embedded Cluster (first generation product) | Allows the customer to install with Replicated kURL on a VM or bare metal server. **Note**: For new installations, enable Replicated Embedded Cluster (current generation product) instead of Replicated kURL (first generation product). | |
+| Embedded Cluster (current generation product) | Allows the customer to install with Replicated Embedded Cluster on a VM or bare metal server. | |
+
+| Install Type | Description | Requirements |
+|---|---|---|
+| Helm CLI Air Gap Instructions (Helm CLI only) | When enabled, a customer will see instructions on the Download Portal on how to pull Helm installable images into their local repository. Helm CLI Air Gap Instructions is enabled by default when you select the Existing Cluster (Helm CLI) install type. For more information, see [Installing with Helm in Air Gap Environments](/vendor/helm-install-airgap). | The Existing Cluster (Helm CLI) install type must be enabled. |
+| Air Gap Installation Option (Replicated Installers only) | When enabled, new installations with this license have an option in their Download Portal to install from an air gap package or do a traditional online installation. | At least one of the following Replicated install types must be enabled: Existing Cluster (KOTS install), kURL Embedded Cluster (first generation product), or Embedded Cluster (current generation product). |
+| Field Name | Description |
+|---|---|
+| `appSlug` | The unique application slug that the customer is associated with. This value never changes. |
+| `channelID` | The ID of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
+| `channelName` | The name of the channel where the customer is assigned. When the customer's assigned channel changes, the latest release from that channel will be downloaded on the next update check. |
+| `licenseID`, `licenseId` | Unique ID for the installed license. This value never changes. |
+| `customerEmail` | The customer email address. |
+| `endpoint` | For applications installed with a Replicated installer (KOTS, kURL, Embedded Cluster), this is the endpoint that the KOTS Admin Console uses to synchronize licenses and download updates. This is typically `https://replicated.app`. |
+| `entitlementValues` | Includes both the built-in `expires_at` field and any custom license fields. For more information about adding custom license fields, see [Managing Customer License Fields](licenses-adding-custom-fields). |
+| `expires_at` | Defines the expiration date for the license. The date is encoded in ISO 8601 format (`2026-01-23T00:00:00Z`) and is set to midnight UTC on the date selected. If a license does not expire, this field is missing. For information about the default behavior when a license expires, see [License Expiration Handling](licenses-about#expiration) in _About Customers_. |
+| `licenseSequence` | Every time a license is updated, its sequence number is incremented. This value represents the license sequence that the client currently has. |
+| `customerName` | The name of the customer. |
+| `signature` | The base64-encoded license signature. This value will change when the license is updated. |
+| `licenseType` | A string value that describes the type of the license, which is one of the following: `paid`, `trial`, `dev`, `single-tenant-vendor-managed`, or `community`. For more information about license types, see [Managing License Type](licenses-about-types). |
+
+| Field Name | Description |
+|---|---|
+| `isEmbeddedClusterDownloadEnabled` | If a license supports installation with Replicated Embedded Cluster, this field is set to `true` or missing. If Embedded Cluster installations are not supported, this field is `false`. This field requires that the vendor has the Embedded Cluster entitlement and that the release at the head of the channel includes an [Embedded Cluster Config](/reference/embedded-config) custom resource. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
+| `isHelmInstallEnabled` | If a license supports Helm installations, this field is set to `true` or missing. If Helm installations are not supported, this field is set to `false`. This field requires that the vendor packages the application as Helm charts and, optionally, Replicated custom resources. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
+| `isKotsInstallEnabled` | If a license supports installation with Replicated KOTS, this field is set to `true`. If KOTS installations are not supported, this field is either `false` or missing. This field requires that the vendor has the KOTS entitlement. |
+| `isKurlInstallEnabled` | If a license supports installation with Replicated kURL, this field is set to `true` or missing. If kURL installations are not supported, this field is `false`. This field requires that the vendor has the kURL entitlement and a promoted kURL installer spec. This field also requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
+
+| Field Name | Description |
+|---|---|
+| `isAirgapSupported` | If a license supports air gap installations with the Replicated installers (KOTS, kURL, Embedded Cluster), then this field is set to `true`. If Replicated installer air gap installations are not supported, this field is missing. When you enable this field for a license, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. |
+| `isHelmAirgapEnabled` | If a license supports Helm air gap installations, then this field is set to `true` or missing. If Helm air gap is not supported, this field is missing. When you enable this feature, the `license.yaml` file will have license metadata embedded in it and must be re-downloaded. This field requires that the "Install Types" feature is enabled for your Vendor Portal team. Reach out to your Replicated account representative to get access. |
+
+| Field Name | Description |
+|---|---|
+| `isDisasterRecoverySupported` | If a license supports the Embedded Cluster disaster recovery feature, this field is set to `true`. If a license does not support disaster recovery for Embedded Cluster, this field is either missing or `false`. **Note**: Embedded Cluster Disaster Recovery is in Alpha. To get access to this feature, reach out to Alex Parker at [alexp@replicated.com](mailto:alexp@replicated.com). For more information, see [Disaster Recovery for Embedded Cluster](/vendor/embedded-disaster-recovery). |
+| `isGeoaxisSupported` | (kURL Only) If a license supports integration with GeoAxis, this field is set to `true`. If GeoAxis is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor has the GeoAxis entitlement. It also requires that the vendor has access to the Identity Service entitlement. |
+| `isGitOpsSupported` | |
+| `isIdentityServiceSupported` | If a license supports identity-service enablement for the Admin Console, this field is set to `true`. If identity service is not supported, this field is either `false` or missing. **Note**: This field requires that the vendor have access to the Identity Service entitlement. |
+| `isSemverRequired` | If set to `true`, this field requires that the Admin Console orders releases according to Semantic Versioning. This field is controlled at the channel level. For more information about enabling Semantic Versioning on a channel, see [Semantic Versioning](releases-about#semantic-versioning) in _About Releases_. |
+| `isSnapshotSupported` | If a license supports the snapshots backup and restore feature, this field is set to `true`. If a license does not support snapshots, this field is either missing or `false`. **Note**: This field requires that the vendor have access to the Snapshots entitlement. |
+| `isSupportBundleUploadSupported` | If a license supports uploading a support bundle in the Admin Console, this field is set to `true`. If a license does not support uploading a support bundle, this field is either missing or `false`. |
-| Description | Notifies if any manifest file has `allowPrivilegeEscalation` set to `true`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Requires an application icon. |
-|---|---|
-| Level | Warn |
-| Applies To | Files with `kind: Application` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Requires an Application custom resource manifest file. Accepted value for `kind`: `Application`. |
-|---|---|
-| Level | Warn |
-
-| Description | Requires `statusInformers`. |
-|---|---|
-| Level | Warn |
-| Applies To | Files with `kind: Application` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces valid types for Config items. For more information, see Items in Config. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Enforces that all ConfigOption items do not reference themselves. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Config` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Requires all ConfigOption items to be defined in the Config custom resource manifest file. |
-|---|---|
-| Level | Warn |
-| Applies To | All files |
-
-| Description | Enforces that sub-templated ConfigOption items must be repeatable. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Requires ConfigOption items with password-like names (such as `password`, `secret`, or `token`) to have `type` set to `password`. |
-|---|---|
-| Level | Warn |
-| Applies To | All files |
-
-| Description | Enforces valid `when` values. For more information, see when in Config. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Config` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces a valid RE2 regular expression pattern when regex validation is present. For more information, see Validation in Config. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Config` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces a valid item type when regex validation is present. Item type should be `text`, `textarea`, `password`, or `file`. For more information, see Validation in Config. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Config` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Requires a Config custom resource manifest file. Accepted value for `kind`: `Config`. Accepted value for `apiVersion`: `kots.io/v1beta1`. |
-|---|---|
-| Level | Warn |
-
-| Description | Notifies if any manifest file has a container image tag appended with `:latest`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Disallows any manifest file having a container image tag that includes `LocalImageName`. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.limits` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.requests` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a manifest file has no `resources` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Disallows using the deprecated kURL installer `apiVersion: kurl.sh/v1beta1`. |
-|---|---|
-| Level | Warn |
-| Applies To | Files with `kind: Installer` and `apiVersion: kurl.sh/v1beta1`. |
-
-| Description | Enforces unique release names across HelmChart custom resources. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: HelmChart` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Disallows duplicate Replicated custom resources. A release can only include one of each `kind`. This rule disallows inclusion of more than one file with the same Replicated custom resource `kind`. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Notifies if any manifest file has a `namespace` specified in its `metadata`. Replicated strongly recommends not specifying a namespace to allow for flexibility when deploying into end user environments. For more information, see Managing Application Namespaces. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Requires that a `*.tar.gz` chart archive matching the HelmChart custom resource is present in the release. |
-|---|---|
-| Level | Error |
-| Applies To | Releases with a HelmChart custom resource manifest file containing `kind: HelmChart` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces that a HelmChart custom resource manifest file with `kind: HelmChart` is present for each `*.tar.gz` chart archive in the release. |
-|---|---|
-| Level | Error |
-| Applies To | Releases with a `*.tar.gz` archive file present. |
-
-| Description | Enforces valid values for fields in the HelmChart custom resource. |
-|---|---|
-| Level | Warn |
-| Applies To | Files with `kind: HelmChart` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces valid Replicated kURL add-on versions. kURL add-ons included in the kURL installer must pin specific versions rather than `latest` versions or x-ranges (such as 1.2.x). |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Installer`. |
-
-| Description | Requires a value for a required Application custom resource field. Accepts a string value. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Application` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Enforces valid YAML after rendering the manifests using the Config spec. |
-|---|---|
-| Level | Error |
-| Applies To | YAML files |
-
-| Description | Requires a value for a required Application custom resource field. Accepts a string value. |
-|---|---|
-| Level | Error |
-| Applies To | Files with `kind: Application` and `apiVersion: kots.io/v1beta1`. |
-
-| Description | Requires that the value of a property matches that property's expected type. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Enforces valid YAML. |
-|---|---|
-| Level | Error |
-| Applies To | YAML files |
-
-| Description | Notifies if any manifest file may contain secrets. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Requires the `apiVersion:` field in all files. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Requires the `kind:` field in all files. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Requires that each `statusInformers` entry references a resource that exists in the release. The linter cannot evaluate template functions in `statusInformers` values. If you configure status informers for Helm-managed resources, you can ignore this rule. |
-|---|---|
-| Level | Warning |
-| Applies To | Compares `statusInformers` values to the manifest files in the release. |
-
-| Description | Requires a Preflight custom resource manifest file with `kind: Preflight` and one of the following: `apiVersion: troubleshoot.sh/v1beta2` or `apiVersion: troubleshoot.replicated.com/v1beta1`. |
-|---|---|
-| Level | Warn |
-
-| Description | Notifies if any manifest file has `privileged` set to `true`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Enforces valid ConfigOption template targets for repeatable items. For more information, see Repeatable Item Template Targets in Config. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Disallows a repeatable Config item with undefined `templates`. For more information, see Repeatable Item Template Targets in Config. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Disallows a repeatable Config item with undefined `valuesByGroup`. For more information, see Repeatable Items in Config. |
-|---|---|
-| Level | Error |
-| Applies To | All files |
-
-| Description | Notifies if any manifest file has `replicas` set to `1`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.limits.cpu` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.limits.memory` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.requests.cpu` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.container` has no `resources.requests.memory` field. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Requires a Troubleshoot manifest file. Accepted values for `kind`: `Collector` or `SupportBundle`. Accepted values for `apiVersion`: `troubleshoot.replicated.com/v1beta1` or `troubleshoot.sh/v1beta2`. |
-|---|---|
-| Level | Warn |
-
-| Description | Notifies if a `spec.volumes` entry has `hostPath` set to `/var/run/docker.sock`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
-
-| Description | Notifies if a `spec.volumes` entry defines a `hostPath`. |
-|---|---|
-| Level | Info |
-| Applies To | All files |
+| Method | Description |
+|---|---|
+| Promote the installer to a channel | The installer is promoted to one or more channels. All releases on the channel use the kURL installer that is currently promoted to that channel. There can be only one active kURL installer on each channel at a time. The benefit of promoting an installer to one or more channels is that you can create a single installer without needing to add a separate installer for each release. However, because all the releases on the channel use the same installer, problems can occur if all releases are not tested with the given installer. |
+| Include the installer in a release (Beta) | The installer is included as a manifest file in a release. This makes it easier to test the installer and release together. It also makes it easier to know which installer spec customers are using based on the application version that they have installed. |
+
+ [View a larger version of this image](/images/kurl-installers-page.png)
+
+1. Edit the file to customize the installer. For guidance on which add-ons to choose, see [Requirements and Recommendations](#requirements-and-recommendations) below.
+
+ You can also go to the landing page at [kurl.sh](https://kurl.sh/) to build an installer then copy the provided YAML:
+
+
+
+ [View a larger version of this image](/images/kurl-build-an-installer.png)
+
+1. Click **Save installer**. You can continue to edit your file until it is promoted.
+
+1. Click **Promote**. In the **Promote Installer** dialog that opens, edit the fields:
+
+
+
+ [View a larger version of this image](/images/promote-installer.png)
+
+   | Field | Description |
+   |---|---|
+   | Channel | Select the channel or channels where you want to promote the installer. |
+   | Version label | Enter a version label for the installer. |
+
+ [View a larger version of this image](/images/kurl-build-an-installer.png)
+
+1. Click **Save**. This saves a draft that you can continue to edit until you promote it.
+
+1. Click **Promote**.
+
+ To make changes after promoting, create a new release.
+
+## kURL Add-on Requirements and Recommendations {#requirements-and-recommendations}
+
+kURL includes several add-ons for networking, storage, ingress, and more. The add-ons that you choose depend on the requirements for KOTS and the unique requirements for your application. For more information about each add-on, see the open source [kURL documentation](https://kurl.sh/docs/introduction/).
+
+When creating a kURL installer, consider the following requirements and guidelines for kURL add-ons:
+
+- You must include the KOTS add-on to support installation with KOTS and provision the KOTS Admin Console. See [KOTS add-on](https://kurl.sh/docs/add-ons/kotsadm) in the kURL documentation.
+
+- To support the use of KOTS snapshots, Velero must be installed in the cluster. Replicated recommends that you include the Velero add-on in your kURL installer so that your customers do not have to manually install Velero.
+
+ :::note
+ During installation, the Velero add-on automatically deploys internal storage for backups. The Velero add-on requires the MinIO or Rook add-on to deploy this internal storage. If you include the Velero add-on without either the MinIO add-on or the Rook add-on, installation fails with the following error message: `Only Rook and Longhorn are supported for Velero Internal backup storage`.
+ :::
+
+- You must select storage add-ons based on the KOTS requirements and the unique requirements for your application. For more information, see [About Selecting Storage Add-ons](packaging-installer-storage).
+
+- kURL installers that are included in releases must pin specific add-on versions and cannot pin `latest` versions or x-ranges (such as 1.2.x). Pinning specific versions ensures the most testable and reproducible installations. For example, pin `Kubernetes 1.23.0` in your manifest to ensure that version 1.23.0 of Kubernetes is installed. For more information about pinning Kubernetes versions, see [Versions](https://kurl.sh/docs/create-installer/#versions) and [Versioned Releases](https://kurl.sh/docs/install-with-kurl/#versioned-releases) in the kURL open source documentation.
+
+ :::note
+ For kURL installers that are _not_ included in a release, pinning specific versions of Kubernetes and Kubernetes add-ons in the kURL installer manifest is not required, though is highly recommended.
+ :::
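+
+  For illustration, a minimal installer that pins specific versions might look like the following sketch (the add-on versions shown are placeholders, not recommendations):
+
+  ```yaml
+  apiVersion: cluster.kurl.sh/v1beta1
+  kind: Installer
+  metadata:
+    name: my-installer
+  spec:
+    kubernetes:
+      version: 1.23.0    # pin an exact version, not latest or 1.23.x
+    containerd:
+      version: 1.6.31
+    flannel:
+      version: 0.25.1
+    kotsadm:
+      version: 1.117.0
+  ```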
+
+- After you configure a kURL installer, Replicated recommends that you customize host preflight checks to support the installation experience with kURL. Host preflight checks help ensure successful installation and the ongoing health of the cluster. For more information about customizing host preflight checks, see [Customizing Host Preflight Checks for Kubernetes Installers](preflight-host-preflights).
+
+- For installers included in a release, Replicated recommends that you define a preflight check in the release to ensure that the target kURL installer is deployed before the release is installed. For more information about how to define preflight checks, see [Defining Preflight Checks](preflight-defining).
+
+ For example, the following preflight check uses the `yamlCompare` analyzer with the `kots.io/installer: "true"` annotation to compare the target kURL installer that is included in the release against the kURL installer that is currently deployed in the customer's environment. For more information about the `yamlCompare` analyzer, see [`yamlCompare`](https://troubleshoot.sh/docs/analyze/yaml-compare/) in the open source Troubleshoot documentation.
+
+ ```yaml
+ apiVersion: troubleshoot.sh/v1beta2
+ kind: Preflight
+ metadata:
+ name: installer-preflight-example
+ spec:
+ analyzers:
+ - yamlCompare:
+ annotations:
+ kots.io/installer: "true"
+ checkName: Kubernetes Installer
+ outcomes:
+ - fail:
+ message: The kURL installer for this version differs from what you have installed. It is recommended that you run the updated kURL installer before deploying this version.
+ uri: https://kurl.sh/my-application
+ - pass:
+ message: The kURL installer for this version matches what is currently installed.
+ ```
+
+
\ No newline at end of file
diff --git a/docs/reference/packaging-include-resources.md b/docs/reference/packaging-include-resources.md
new file mode 100644
index 0000000000..50938c7b63
--- /dev/null
+++ b/docs/reference/packaging-include-resources.md
@@ -0,0 +1,126 @@
+# Conditionally Including or Excluding Resources
+
+This topic describes how to include or exclude optional application resources based on one or more conditional statements. The information in this topic applies to Helm chart- and standard manifest-based applications.
+
+## Overview
+
+Software vendors often need a way to conditionally deploy resources for an application depending on users' configuration choices. For example, a common use case is giving the user the choice to use an external database or an embedded database. In this scenario, when a user chooses to use their own external database, it is not desirable to deploy the embedded database resources.
+
+There are different options for creating conditional statements to include or exclude resources based on the application type (Helm chart- or standard manifest-based) and the installation method (Replicated KOTS or Helm CLI).
+
+### About Replicated Template Functions
+
+For applications deployed with KOTS, Replicated template functions are available for creating the conditional statements that control which optional resources are deployed for a given user. Replicated template functions can be used in standard manifest files such as Replicated custom resources or Kubernetes resources like StatefulSets, Secrets, and Services.
+
+For example, the Replicated ConfigOptionEquals template function returns true if the specified configuration option value is equal to a supplied value. This is useful for creating conditional statements that include or exclude a resource based on a user's application configuration choices.
+
+For more information about the available Replicated template functions, see [About Template Functions](/reference/template-functions-about).
+
+## Include or Exclude Helm Charts
+
+This section describes methods for including or excluding Helm charts from your application deployment.
+
+### Helm Optional Dependencies
+
+Helm supports adding a `condition` field to dependencies in the Helm chart `Chart.yaml` file to include subcharts based on one or more boolean values evaluating to true.
+
+For more information about working with dependencies and defining optional dependencies for Helm charts, see [Dependencies](https://helm.sh/docs/chart_best_practices/dependencies/) in the Helm documentation.
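+
+For example, the following `Chart.yaml` sketch declares an optional PostgreSQL subchart that is included only when the `postgresql.enabled` value evaluates to true. The chart name, versions, and repository are illustrative:
+
+```yaml
+# Chart.yaml (sketch)
+apiVersion: v2
+name: my-app
+version: 1.0.0
+dependencies:
+  - name: postgresql
+    version: 12.1.7
+    repository: https://charts.bitnami.com/bitnami
+    # The subchart is included only when this value evaluates to true
+    condition: postgresql.enabled
+```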
+
+### HelmChart `exclude` Field
+
+For Helm chart-based applications installed with KOTS, you can configure KOTS to exclude certain Helm charts from deployment using the HelmChart custom resource [`exclude`](/reference/custom-resource-helmchart#exclude) field. When the `exclude` field is set to a conditional statement, KOTS excludes the chart if the condition evaluates to `true`.
+
+The following example uses the `exclude` field and the ConfigOptionEquals template function to exclude a postgresql Helm chart when the `external_postgres` option is selected on the Replicated Admin Console **Config** page:
+
+```yaml
+apiVersion: kots.io/v1beta2
+kind: HelmChart
+metadata:
+ name: postgresql
+spec:
+ exclude: 'repl{{ ConfigOptionEquals `postgres_type` `external_postgres` }}'
+ chart:
+ name: postgresql
+ chartVersion: 12.1.7
+ releaseName: samplechart-release-1
+```
+
+## Include or Exclude Standard Manifests
+
+For standard manifest-based applications installed with KOTS, you can use the `kots.io/exclude` or `kots.io/when` annotations to include or exclude resources based on a conditional statement.
+
+By default, if neither `kots.io/exclude` nor `kots.io/when` is present on a resource, the resource is included.
+
+### Requirements
+
+The `kots.io/exclude` and `kots.io/when` annotations have the following requirements:
+
+* Only one of the `kots.io/exclude` or `kots.io/when` annotations can be present on a single resource. If both are present, the `kots.io/exclude` annotation is applied, and the `kots.io/when` annotation is ignored.
+
+* The values of the `kots.io/exclude` and `kots.io/when` annotations must be wrapped in quotes. This is because Kubernetes annotations must be strings. For more information about working with Kubernetes annotations, see [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) in the Kubernetes documentation.
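+
+The following sketch illustrates both requirements: the annotation value is wrapped in quotes, and only one of the two annotations is present on the resource. The config option names are assumptions for illustration:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: postgresql
+  annotations:
+    # Include this resource only when the embedded database option is selected
+    kots.io/when: '{{repl ConfigOptionEquals "postgres_type" "embedded_postgres" }}'
+spec:
+  ports:
+    - port: 5432
+  selector:
+    app: postgresql
+```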
+
+### `kots.io/exclude`
+
+When the `kots.io/exclude` annotation is present on a resource and evaluates to `'true'`, KOTS excludes the resource from the deployment.
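+
+For example, in the following sketch, the ConfigMap is excluded from the deployment when the user selects an external database on the Admin Console **Config** page. The config option names are assumptions for illustration:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: embedded-postgres-config
+  annotations:
+    # Exclude this resource when the external database option is selected
+    kots.io/exclude: '{{repl ConfigOptionEquals "postgres_type" "external_postgres" }}'
+data:
+  POSTGRES_DB: app
+```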
+
+ [View a larger version of this image](/images/add-external-registry.png)
+
+1. In the **Provider** drop-down, select your registry provider.
+
+1. Complete the fields in the dialog, depending on the provider that you chose:
+
+ :::note
+ Replicated stores your credentials encrypted and securely. Your credentials and the encryption key do not leave Replicated servers.
+ :::
+
+   * **Amazon ECR**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `123456689.dkr.ecr.us-east-1.amazonaws.com`. |
+     | Access Key ID | Enter the Access Key ID for a Service Account User that has pull access to the registry. See Setting up the Service Account User. |
+     | Secret Access Key | Enter the Secret Access Key for the Service Account User. |
+
+   * **DockerHub**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `index.docker.io`. |
+     | Auth Type | Select the authentication type for a DockerHub account that has pull access to the registry. |
+     | Username | Enter the username for the account. |
+     | Password or Token | Enter the password or token for the account, depending on the authentication type you selected. |
+
+   * **GitHub Container Registry**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry. |
+     | Username | Enter the username for an account that has pull access to the registry. |
+     | GitHub Token | Enter the token for the account. |
+
+   * **Google Artifact Registry**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `us-east1-docker.pkg.dev`. |
+     | Auth Type | Select the authentication type for a Google Cloud Platform account that has pull access to the registry. |
+     | Service Account JSON Key or Token | Enter the JSON Key from Google Cloud Platform assigned with the Artifact Registry Reader role, or the token for the account, depending on the authentication type you selected. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
+
+   * **Google Container Registry**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `gcr.io`. |
+     | Service Account JSON Key | Enter the JSON Key for a Service Account in Google Cloud Platform that is assigned the Storage Object Viewer role. For more information about creating a Service Account, see Access Control with IAM in the Google Cloud documentation. |
+
+   * **Quay.io**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `quay.io`. |
+     | Username and Password | Enter the username and password for an account that has pull access to the registry. |
+
+   * **Sonatype Nexus Repository Manager**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `nexus.example.net`. |
+     | Username and Password | Enter the username and password for an account that has pull access to the registry. |
+
+   * **Other**
+
+     | Field | Instructions |
+     |---|---|
+     | Hostname | Enter the host name for the registry, such as `example.registry.com`. |
+     | Username and Password | Enter the username and password for an account that has pull access to the registry. |
+
+| Product Phase | Definition |
+|---|---|
+| Alpha | A product or feature that is exploratory or experimental. Typically, access to alpha features and their documentation is limited to customers providing early feedback. While most alpha features progress to beta and general availability (GA), some are deprecated based on assessment learnings. |
+| Beta | A product or feature that is typically production-ready, but has not met Replicated's definition of GA for one or more reasons. Documentation for beta products and features is published on the Replicated Documentation site with a "(Beta)" label. Beta products or features follow the same build and test processes required for GA. Contact your Replicated account representative if you have questions about why a product or feature is beta. |
+| "GA" - General Availability | A product or feature that has been validated as both production-ready and value-additive by a percentage of Replicated customers. Products in the GA phase are typically those that are available for purchase from Replicated. |
+| "LA" - Limited Availability | A product has reached the Limited Availability phase when it is no longer available for new purchases from Replicated. Updates will be primarily limited to security patches, critical bugs, and features that enable migration to GA products. |
+| "EOA" - End of Availability | A product has reached the End of Availability phase when it is no longer available for renewal purchase by existing customers. This date may coincide with the Limited Availability phase. This product is considered deprecated, and will move to End of Life after a determined support window. Product maintenance is limited to critical security issues only. |
+| "EOL" - End of Life | A product has reached its End of Life, and will no longer be supported, patched, or fixed by Replicated. Associated product documentation may no longer be available. The Replicated team will continue to engage to migrate end customers to GA product-based deployments of your application. |
+
+| Replicated Product | Product Phase | End of Availability | End of Life |
+|---|---|---|---|
+| Compatibility Matrix | GA | N/A | N/A |
+| Replicated SDK | Beta | N/A | N/A |
+| Replicated KOTS Installer | GA | N/A | N/A |
+| Replicated kURL Installer | GA | N/A | N/A |
+| Replicated Embedded Cluster Installer | GA | N/A | N/A |
+| Replicated Classic Native Installer | EOL | 2023-12-31* | 2024-12-31* |
+
+| Kubernetes Version | Embedded Cluster Versions | KOTS Versions | kURL Versions | End of Replicated Support |
+|---|---|---|---|---|
+| 1.32 | N/A | N/A | N/A | 2026-02-28 |
+| 1.31 | N/A | 1.117.0 and later | v2024.08.26-0 and later | 2025-10-28 |
+| 1.30 | 1.16.0 and later | 1.109.1 and later | v2024.05.03-0 and later | 2025-06-28 |
+| 1.29 | 1.0.0 and later | 1.105.2 and later | v2024.01.02-0 and later | 2025-02-28 |
+
+The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how the `warn` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+This example uses Helm template functions to render the credentials and connection details for the MySQL server that were supplied by the user. Additionally, it uses Helm template functions to create a conditional statement so that the MySQL collector and analyzer are included in the preflight checks only when MySQL is deployed, as indicated by the `.Values.global.mysql.enabled` field evaluating to `true`.
+
+For more information about using Helm template functions to access values from the values file, see [Values Files](https://helm.sh/docs/chart_template_guide/values_files/) in the Helm documentation.
+
+This example uses KOTS template functions in the Config context to render the credentials and connection details for the MySQL server that were supplied by the user on the Replicated Admin Console **Config** page. Replicated recommends using a template function for the URI, as shown above, to avoid exposing sensitive information. For more information about template functions, see [About Template Functions](/reference/template-functions-about).
+
+This example also uses an analyzer with `strict: true`, which prevents installation from continuing if the preflight check fails.
+
+The following shows how a `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade when `strict: true` is set for the analyzer.
+
+The following shows how a `warn` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how a `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how a `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how the `fail` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+The following shows how the `pass` outcome for this preflight check is displayed in the Admin Console during KOTS installation or upgrade.
+
+| Flag: Value | Description |
+|---|---|
+| `hostPreflightIgnore: true` | Ignores host preflight failures and warnings. The installation proceeds regardless of host preflight outcomes. |
+| `hostPreflightEnforceWarnings: true` | Blocks an installation if the results include a warning. |
+
+ [View a larger version of this image](/images/helm-install-preflights.png)
+
+1. Run the commands provided in the dialog:
+
+ 1. Run the first command to log in to the Replicated registry:
+
+ ```
+ helm registry login registry.replicated.com --username USERNAME --password PASSWORD
+ ```
+
+ Where:
+ - `USERNAME` is the customer's email address.
+ - `PASSWORD` is the customer's license ID.
+
+ **Example:**
+ ```
+   helm registry login registry.replicated.com --username example@companyname.com --password 1234abcd
+ ```
+
+ 1. Run the second command to install the kubectl plugin with krew:
+
+ ```
+ curl https://krew.sh/preflight | bash
+ ```
+
+ 1. Run the third command to run preflight checks:
+
+ ```
+ helm template oci://registry.replicated.com/APP_SLUG/CHANNEL/CHART | kubectl preflight -
+ ```
+
+ Where:
+ - `APP_SLUG` is the name of the application.
+ - `CHANNEL` is the lowercased name of the release channel.
+ - `CHART` is the name of the Helm chart.
+
+ **Examples:**
+
+ ```
+ helm template oci://registry.replicated.com/gitea-app/unstable/gitea | kubectl preflight -
+ ```
+ ```
+ helm template oci://registry.replicated.com/gitea-app/unstable/gitea --values values.yaml | kubectl preflight -
+ ```
+
+ For all available options with this command, see [Run Preflight Checks using the CLI](https://troubleshoot.sh/docs/preflight/cli-usage/#options) in the open source Troubleshoot documentation.
+
+ 1. (Optional) Run the fourth command to install the application. For more information, see [Installing with Helm](install-with-helm).
+
+## (Optional) Save Preflight Check Results
+
+The output of the preflight plugin shows the success, warning, or failure message for each preflight check, depending on how the checks were configured. You can ask your users to send you the results of the preflight checks if needed.
+
+To save the results of preflight checks to a `.txt` file, users can press `s` when viewing the results from the CLI, as shown in the example below:
+
+
+
+[View a larger version of this image](/images/helm-preflight-save-output.png)
diff --git a/docs/reference/preflight-sb-helm-templates-about.md b/docs/reference/preflight-sb-helm-templates-about.md
new file mode 100644
index 0000000000..db811ffa43
--- /dev/null
+++ b/docs/reference/preflight-sb-helm-templates-about.md
@@ -0,0 +1,151 @@
+# Using Helm Templates in Specifications
+
+You can use Helm templates to configure collectors and analyzers for preflight checks and support bundles for Helm installations.
+
+Helm templates can be useful when you need to:
+
+- Run preflight checks based on certain conditions being true or false, such as whether the customer wants to use an external database.
+- Pull in user-specific information from the `values.yaml` file, such as the version a customer is using for an external database.
+
+You can also use Helm templating with the Troubleshoot template functions for the `clusterPodStatuses` analyzer. For more information, see [Helm and Troubleshoot Template Example](#troubleshoot).
+
+## Helm Template Example
+
+In the following example, the `mysql` collector is included in a preflight check if the customer does not want to use the default MariaDB. This is indicated by the template `{{- if eq .Values.global.mariadb.enabled false -}}`.
+
+This specification also takes the MySQL connection string information from the `values.yaml` file, indicated by the template `'{{ .Values.global.externalDatabase.user }}:{{ .Values.global.externalDatabase.password }}@tcp({{ .Values.global.externalDatabase.host }}:{{ .Values.global.externalDatabase.port }})/{{ .Values.global.externalDatabase.database }}?tls=false'` in the `uri` field.
+
+Additionally, the specification verifies that the maximum number of nodes set in the `values.yaml` file is not exceeded by including the template `'count() > {{ .Values.global.maxNodeCount }}'` in the `nodeResources` analyzer.
+
+```yaml
+{{- define "preflight.spec" }}
+apiVersion: troubleshoot.sh/v1beta2
+kind: Preflight
+metadata:
+ name: preflight-sample
+spec:
+ {{ if eq .Values.global.mariadb.enabled false }}
+ collectors:
+ - mysql:
+ collectorName: mysql
+ uri: '{{ .Values.global.externalDatabase.user }}:{{ .Values.global.externalDatabase.password }}@tcp({{ .Values.global.externalDatabase.host }}:{{ .Values.global.externalDatabase.port }})/{{ .Values.global.externalDatabase.database }}?tls=false'
+ {{ end }}
+ analyzers:
+ - nodeResources:
+ checkName: Node Count Check
+ outcomes:
+ - fail:
+ when: 'count() > {{ .Values.global.maxNodeCount }}'
+ message: "The cluster has more than {{ .Values.global.maxNodeCount }} nodes."
+ - pass:
+ message: You have the correct number of nodes.
+ - clusterVersion:
+ outcomes:
+ - fail:
+ when: "< 1.22.0"
+ message: The application requires at least Kubernetes 1.22.0, and recommends 1.23.0.
+ uri: https://kubernetes.io
+ - warn:
+ when: "< 1.23.0"
+ message: Your cluster meets the minimum version of Kubernetes, but we recommend you update to 1.23.0 or later.
+ uri: https://kubernetes.io
+ - pass:
+ message: Your cluster meets the recommended and required versions of Kubernetes.
+ {{ if eq .Values.global.mariadb.enabled false }}
+ - mysql:
+ checkName: Must be MySQL 8.x or later
+ collectorName: mysql
+ outcomes:
+ - fail:
+ when: connected == false
+ message: Cannot connect to MySQL server
+ - fail:
+ when: version < 8.x
+ message: The MySQL server must be at least version 8
+ - pass:
+ message: The MySQL server is ready
+ {{ end }}
+{{- end }}
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ labels:
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ app.kubernetes.io/version: {{ .Chart.AppVersion }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ troubleshoot.sh/kind: preflight
+ name: "{{ .Release.Name }}-preflight-config"
+stringData:
+ preflight.yaml: |
+{{- include "preflight.spec" . | indent 4 }}
+```
+
+## Helm and Troubleshoot Template Example {#troubleshoot}
+
+You can also use Helm templates with the Troubleshoot template functions to automatically add the Pod name and namespace to a message when a `clusterPodStatuses` analyzer fails. For more information about the Troubleshoot template function, see [Cluster Pod Statuses](https://troubleshoot.sh/docs/analyze/cluster-pod-statuses/) in the Troubleshoot documentation.
+
+When you add the `clusterPodStatuses` analyzer template function values (such as `{{ .Name }}`) to your Helm template, you must encapsulate the Helm template using \{\{ ` ` \}\} so that Helm does not expand it.
+
+The following example shows an analyzer that uses Troubleshoot templates and the override for Helm:
+
+```yaml
+# This is the support bundle config secret that will be used to generate the support bundle
+apiVersion: v1
+kind: Secret
+metadata:
+ labels:
+ troubleshoot.sh/kind: support-bundle
+ name: {{ .Release.Name }}-support-bundle
+ namespace: {{ .Release.Namespace }}
+type: Opaque
+stringData:
+ # This is the support bundle spec that will be used to generate the support bundle
+ # Notes: we use {{ .Release.Namespace }} to ensure that the support bundle is scoped to the release namespace
+ # We can use any of Helm's templating features here, including {{ .Values.someValue }}
+ support-bundle-spec: |
+ apiVersion: troubleshoot.sh/v1beta2
+ kind: SupportBundle
+ metadata:
+ name: support-bundle
+ spec:
+ collectors:
+ - clusterInfo: {}
+ - clusterResources: {}
+ - logs:
+ selector:
+ - app=someapp
+ namespace: {{ .Release.Namespace }}
+ analyzers:
+ - clusterPodStatuses:
+ name: unhealthy
+ namespaces:
+ - default
+ - myapp-namespace
+ outcomes:
+ - fail:
+ when: "== CrashLoopBackOff"
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in a CrashLoopBackOff state.` }}
+ - fail:
+ when: "== ImagePullBackOff"
+                message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in an ImagePullBackOff state.` }}
+ - fail:
+ when: "== Pending"
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in a Pending state.` }}
+ - fail:
+ when: "== Evicted"
+                message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in an Evicted state.` }}
+ - fail:
+ when: "== Terminating"
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in a Terminating state.` }}
+ - fail:
+ when: "== Init:Error"
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in an Init:Error state.` }}
+ - fail:
+ when: "== Init:CrashLoopBackOff"
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is in an Init:CrashLoopBackOff state.` }}
+ - fail:
+ when: "!= Healthy" # Catch all unhealthy pods. A pod is considered healthy if it has a status of Completed, or Running and all of its containers are ready.
+ message: {{ `Pod {{ .Namespace }}/{{ .Name }} is unhealthy with a status of {{ .Status.Reason }}.` }}
+```
\ No newline at end of file
diff --git a/docs/reference/preflight-support-bundle-about.mdx b/docs/reference/preflight-support-bundle-about.mdx
new file mode 100644
index 0000000000..88471659e7
--- /dev/null
+++ b/docs/reference/preflight-support-bundle-about.mdx
@@ -0,0 +1,150 @@
+import Overview from "../partials/preflights/_preflights-sb-about.mdx"
+
+# About Preflight Checks and Support Bundles
+
+This topic provides an introduction to preflight checks and support bundles, which are provided by the [Troubleshoot](https://troubleshoot.sh/) open source project.
+
+## Overview
+
+| Installation | Support for Image Tags | Support for Image Digests |
+|---|---|---|
+| Online | Supported by default | Supported by default |
+| Air Gap | Supported by default for Replicated KOTS installations | Supported for applications on KOTS v1.82.0 and later when the **Enable new air gap bundle format** toggle is enabled on the channel. For more information, see Using Image Digests in Air Gap Installations below. |
+
+The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The `optionalValues` field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This allows Gitea to be accessed from the local machine after deployment for the purpose of this quick start.
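+
+A sketch of what this HelmChart custom resource might look like is shown below. The `when` condition uses the KOTS Distribution template function, and the value paths under `values` are assumptions that depend on the chart's `values.yaml`:
+
+```yaml
+apiVersion: kots.io/v1beta2
+kind: HelmChart
+metadata:
+  name: gitea
+spec:
+  chart:
+    name: gitea
+    chartVersion: 1.0.6
+  optionalValues:
+    # Set these values only when installing with Embedded Cluster
+    - when: 'repl{{ eq Distribution "embedded-cluster" }}'
+      recursiveMerge: false
+      values:
+        service:
+          type: NodePort
+          nodePorts:
+            http: "32000"
+```
+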
+The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs.
+
+The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
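+
+A sketch of such a KOTS Application custom resource is shown below. The icon URL and port values are illustrative:
+
+```yaml
+apiVersion: kots.io/v1beta1
+kind: Application
+metadata:
+  name: gitea
+spec:
+  title: Gitea
+  # Illustrative icon URL
+  icon: https://example.com/gitea-icon.png
+  statusInformers:
+    - deployment/gitea
+  ports:
+    - serviceName: gitea
+      servicePort: 3000
+      localPort: 8888
+      applicationUrl: "http://gitea"
+```
+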
+The Kubernetes SIG Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML for this resource adds an **Open App** button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
+
+To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
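+
+For example, the following is a minimal Embedded Cluster Config sketch that pins only the Embedded Cluster version. The version shown is illustrative:
+
+```yaml
+apiVersion: embeddedcluster.replicated.com/v1beta1
+kind: Config
+spec:
+  version: 2.1.3+k8s-1.30
+```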
+
+
+ [View a larger version of this image](/images/quick-start-select-gitea-app.png)
+
+ 1. Click **Customers > Create customer**.
+
+ The **Create a new customer** page opens:
+
+ 
+
+ [View a larger version of this image](/images/create-customer.png)
+
+ 1. For **Customer name**, enter a name for the customer. For example, `Example Customer`.
+
+ 1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
+
+ 1. For **License type**, select **Development**.
+
+ 1. For **License options**, enable the following entitlements:
+ * **KOTS Install Enabled**
+ * **Embedded Cluster Enabled**
+
+ 1. Click **Save Changes**.
+
+1. Install the application with Embedded Cluster:
+
+ 1. On the page for the customer that you created, click **Install instructions > Embedded Cluster**.
+
+ 
+
+ [View a larger image](/images/customer-install-instructions-dropdown.png)
+
+ 1. On the command line, SSH onto your VM and run the commands in the **Embedded cluster install instructions** dialog to download the latest release, extract the installation assets, and install.
+
+
+
+ [View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
+
+ 1. When prompted, enter a password for accessing the Admin Console.
+
+ The installation command takes a few minutes to complete.
+
+ **Example output:**
+
+ ```bash
+ ? Enter an Admin Console password: ********
+ ? Confirm password: ********
+ ✔ Host files materialized!
+ ✔ Running host preflights
+ ✔ Node installation finished!
+ ✔ Storage is ready!
+ ✔ Embedded Cluster Operator is ready!
+ ✔ Admin Console is ready!
+ ✔ Additional components are ready!
+ Visit the Admin Console to configure and install gitea-kite: http://104.155.145.60:30000
+ ```
+
+ At this point, the cluster is provisioned and the Admin Console is deployed, but the application is not yet installed.
+
+   1. Go to the URL provided in the output to access the Admin Console.
+
+ 1. On the Admin Console landing page, click **Start**.
+
+   1. On the **Secure the Admin Console** screen, review the instructions and click **Continue**. Then, in your browser, follow the instructions that were provided on that screen to bypass the self-signed certificate warning.
+
+   1. On the **Certificate type** screen, either select **Self-signed** to continue using the self-signed Admin Console certificate or click **Upload your own** to upload your own private key and certificate.
+
+ By default, a self-signed TLS certificate is used to secure communication between your browser and the Admin Console. You will see a warning in your browser every time you access the Admin Console unless you upload your own certificate.
+
+ 1. On the login page, enter the Admin Console password that you created during installation and click **Log in**.
+
+   1. On the **Configure the cluster** screen, you can view details about the VM where you installed, including its node role, status, CPU, and memory. You can also optionally add nodes on this page before deploying the application. Click **Continue**.
+
+ The Admin Console dashboard opens.
+
+ 1. On the Admin Console dashboard, next to the version, click **Deploy** and then **Yes, Deploy**.
+
+ The application status changes from Missing to Unavailable while the `gitea` Deployment is being created.
+
+   1. After a few minutes, when the application status is Ready, click **Open App** to view the Gitea application in a browser.
+
+ For example:
+
+ 
+
+ [View a larger version of this image](/images/gitea-ec-ready.png)
+
+
+
+ [View a larger version of this image](/images/gitea-app.png)
+
+1. Return to the Vendor Portal and go to **Customers**. Under the name of the customer, confirm that you can see an active instance.
+
+ This instance telemetry is automatically collected and sent back to the Vendor Portal by both KOTS and the Replicated SDK. For more information, see [About Instance and Event Data](/vendor/instance-insights-event-data).
+
+1. Under **Instance ID**, click on the ID to view additional insights including the versions of Kubernetes and the Replicated SDK running in the cluster where you installed the application. For more information, see [Instance Details](/vendor/instance-insights-details).
+
+1. Create a new release that adds preflight checks to the application:
+
+ 1. In your local filesystem, go to the `gitea` directory.
+
+ 1. Create a `gitea-preflights.yaml` file in the `templates` directory:
+
+ ```
+ touch templates/gitea-preflights.yaml
+ ```
+
+ 1. In the `gitea-preflights.yaml` file, add the following YAML to create a Kubernetes Secret with a simple preflight spec:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ labels:
+ troubleshoot.sh/kind: preflight
+ name: "{{ .Release.Name }}-preflight-config"
+ stringData:
+ preflight.yaml: |
+ apiVersion: troubleshoot.sh/v1beta2
+ kind: Preflight
+ metadata:
+ name: preflight-sample
+ spec:
+ collectors:
+ - http:
+ collectorName: slack
+ get:
+ url: https://api.slack.com/methods/api.test
+ analyzers:
+ - textAnalyze:
+ checkName: Slack Accessible
+ fileName: slack.json
+ regex: '"status": 200,'
+ outcomes:
+ - pass:
+ when: "true"
+ message: "Can access the Slack API"
+ - fail:
+ when: "false"
+ message: "Cannot access the Slack API. Check that the server can reach the internet and check [status.slack.com](https://status.slack.com)."
+ ```
+ The YAML above defines a preflight check that confirms that an HTTP request to the Slack API at `https://api.slack.com/methods/api.test` made from the cluster returns a successful response of `"status": 200,`.
+
+ 1. In the `Chart.yaml` file, increment the version to 1.0.7:
+
+ ```yaml
+ # Chart.yaml
+ version: 1.0.7
+ ```
+
+ 1. Update dependencies and package the chart to a `.tgz` chart archive:
+
+ ```bash
+ helm package -u .
+ ```
+
+ 1. Move the chart archive to the `manifests` directory:
+
+ ```bash
+ mv gitea-1.0.7.tgz manifests
+ ```
+
+ 1. In the `manifests` directory, open the KOTS HelmChart custom resource (`gitea.yaml`) and update the `chartVersion`:
+
+ ```yaml
+ # gitea.yaml KOTS HelmChart
+ chartVersion: 1.0.7
+ ```
+
+ 1. Remove the chart archive for version 1.0.6 of the Gitea chart from the `manifests` directory:
+
+ ```
+ rm gitea-1.0.6.tgz
+ ```
+
+ 1. From the `manifests` directory, create and promote a new release, setting the version label of the release to `0.0.2`:
+
+ ```bash
+ replicated release create --yaml-dir . --promote Unstable --version 0.0.2
+ ```
+ **Example output**:
+ ```bash
+ • Reading manifests from . ✓
+ • Creating Release ✓
+ • SEQUENCE: 2
+ • Promoting ✓
+ • Channel 2kvjwEj4uBaCMoTigW5xty1iiw6 successfully set to release 2
+ ```
+
+1. On your VM, update the application instance to the new version that you just promoted:
+
+ 1. In the Admin Console, go to the **Version history** tab.
+
+ The new version is displayed automatically.
+
+ 1. Click **Deploy** next to the new version.
+
+ The Embedded Cluster upgrade wizard opens.
+
+ 1. In the Embedded Cluster upgrade wizard, on the **Preflight checks** screen, note that the "Slack Accessible" preflight check that you added was successful. Click **Next: Confirm and deploy**.
+
+ 
+
+ [View a larger version of this image](/images/quick-start-ec-upgrade-wizard-preflight.png)
+
+ :::note
+ The **Config** screen in the upgrade wizard is bypassed because this release does not contain a KOTS Config custom resource. The KOTS Config custom resource is used to set up the Config screen in the KOTS Admin Console.
+ :::
+
+ 1. On the **Confirm and Deploy** page, click **Deploy**.
+
+1. Reset and reboot the VM to remove the installation:
+
+ ```bash
+ sudo ./APP_SLUG reset
+ ```
+ Where `APP_SLUG` is the unique slug for the application.
+
+ :::note
+ You can find the application slug by running `replicated app ls` on your local machine.
+ :::
+
+## Next Steps
+
+Congratulations! As part of this quick start, you:
+* Added the Replicated SDK to a Helm chart
+* Created a release with the Helm chart
+* Installed the release on a VM with Embedded Cluster
+* Viewed telemetry for the installed instance in the Vendor Portal
+* Created a new release to add preflight checks to the application
+* Updated the application from the Admin Console
+
+Now that you are familiar with the workflow of creating, installing, and updating releases, you can begin onboarding your own application to the Replicated Platform.
+
+To get started, see [Replicated Onboarding](replicated-onboarding).
+
+## Related Topics
+
+For more information about the Replicated Platform features mentioned in this quick start, see:
+
+* [About Distributing Helm Charts with KOTS](/vendor/helm-native-about)
+* [About Preflight Checks and Support Bundles](/vendor/preflight-support-bundle-about)
+* [About the Replicated SDK](/vendor/replicated-sdk-overview)
+* [Introduction to KOTS](/intro-kots)
+* [Managing Releases with the CLI](/vendor/releases-creating-cli)
+* [Packaging a Helm Chart for a Release](/vendor/helm-install-release)
+* [Using Embedded Cluster](/vendor/embedded-overview)
+
+## Related Tutorials
+
+For additional tutorials related to this quick start, see:
+
+* [Deploying a Helm Chart on a VM with Embedded Cluster](/vendor/tutorial-embedded-cluster-setup)
+* [Adding Preflight Checks to a Helm Chart](/vendor/tutorial-preflight-helm-setup)
+* [Deploying a Helm Chart with KOTS and the Helm CLI](/vendor/tutorial-kots-helm-setup)
\ No newline at end of file
diff --git a/docs/reference/releases-about.mdx b/docs/reference/releases-about.mdx
new file mode 100644
index 0000000000..5cf69f05c5
--- /dev/null
+++ b/docs/reference/releases-about.mdx
@@ -0,0 +1,254 @@
+import ChangeChannel from "../partials/customers/_change-channel.mdx"
+import RequiredReleasesLimitations from "../partials/releases/_required-releases-limitations.mdx"
+import RequiredReleasesDescription from "../partials/releases/_required-releases-description.mdx"
+import VersionLabelReqsHelm from "../partials/releases/_version-label-reqs-helm.mdx"
+
+# About Channels and Releases
+
+This topic describes channels and releases, including information about the **Releases** and **Channels** pages in the Replicated Vendor Portal.
+
+## Overview
+
+A _release_ represents a single version of your application. Each release is promoted to one or more _channels_. Channels provide a way to progress releases through the software development lifecycle: from internal testing, to sharing with early adopters, and finally to making the release generally available.
+
+Channels also control which customers are able to install a release. You assign each customer to a channel to define the releases that the customer can access. For example, a customer assigned to the Stable channel can only install releases that are promoted to the Stable channel, and cannot see any releases promoted to other channels. For more information about assigning customers to channels, see [Channel Assignment](licenses-about#channel-assignment) in _About Customers_.
+
+Using channels and releases helps you distribute versions of your application to the right customer segments, without needing to manage different release workflows.
+
+You can manage channels and releases with the Vendor Portal, the Replicated CLI, or the Vendor API v3. For more information about creating and managing releases or channels, see [Managing Releases with the Vendor Portal](releases-creating-releases) or [Creating and Editing Channels](releases-creating-channels).
+
+## About Channels
+
+This section provides additional information about channels, including details about the default channels in the Vendor Portal and channel settings.
+
+### Unstable, Beta, and Stable Channels
+
+Replicated includes the following channels by default:
+
+* **Unstable**: The Unstable channel is designed for internal testing and development. You can create and assign an internal test customer to the Unstable channel to install in a development environment. Replicated recommends that you do not license any of your external users against the Unstable channel.
+* **Beta**: The Beta channel is designed for release candidates and early-adopting customers. Replicated recommends that you promote a release to the Beta channel after it has passed automated testing in the Unstable channel. You can also choose to license early-adopting customers against this channel.
+* **Stable**: The Stable channel is designed for releases that are generally available. Replicated recommends that you assign most of your customers to the Stable channel. Customers licensed against the Stable channel only receive application updates when you promote a new release to the Stable channel.
+
+You can archive or edit any of the default channels, and create new channels. For more information, see [Creating and Editing Channels](releases-creating-channels).
+
+### Settings
+
+Each channel has settings. You can customize the settings for a channel to control some of the behavior of releases promoted to the channel.
+
+The following shows the **Channel Settings** dialog, accessed by clicking the settings icon on a channel:
+
+
+
+[View a larger version of this image](/images/channel-settings.png)
+
+The following describes each of the channel settings:
+
+* **Channel name**: The name of the channel. You can change the channel name at any time. Each channel also has a unique ID listed below the channel name.
+* **Description**: Optionally, add a description of the channel.
+* **Set this channel to default**: When enabled, sets the channel as the default channel. The default channel cannot be archived.
+* **Custom domains**: Select the customer-facing domains that releases promoted to this channel use for the Replicated registry, Replicated proxy registry, Replicated app service, or Replicated Download Portal endpoints. If a default custom domain exists for any of these endpoints, choosing a different domain in the channel settings overrides the default. If no custom domains are configured for an endpoint, the drop-down for the endpoint is disabled.
+
+ For more information about configuring custom domains and assigning default domains, see [Using Custom Domains](custom-domains-using).
+* The following channel settings apply only to applications that support KOTS:
+ * **Automatically create airgap builds for newly promoted releases in this channel**: When enabled, the Vendor Portal automatically builds an air gap bundle when a new release is promoted to the channel. When disabled, you can generate an air gap bundle manually for a release on the **Release History** page for the channel.
+ * **Enable semantic versioning**: When enabled, the Vendor Portal verifies that the version label for any releases promoted to the channel uses a valid semantic version. For more information, see [Semantic Versioning](releases-about#semantic-versioning) in _About Releases_.
+ * **Enable new airgap bundle format**: When enabled, air gap bundles built for releases promoted to the channel use a format that supports image digests. This air gap bundle format also ensures that identical image layers are not duplicated, resulting in a smaller air gap bundle size. For more information, see [Using Image Digests in Air Gap Installations](private-images-tags-digests#digests-air-gap) in _Using Image Tags and Digests_.
+
+ :::note
+ The new air gap bundle format is supported for applications installed with KOTS v1.82.0 or later.
+ :::
+
+## About Releases
+
+This section provides additional information about releases, including details about release promotion, properties, sequencing, and versioning.
+
+### Release Files
+
+A release contains your application files as well as the manifests required to install the application with the Replicated installers ([Replicated Embedded Cluster](/vendor/embedded-overview) and [Replicated KOTS](../intro-kots)).
+
+The application files in releases can be Helm charts and/or Kubernetes manifests. Replicated strongly recommends that all applications are packaged as Helm charts because many enterprise customers will expect to be able to install with Helm.
+
+### Promotion
+
+Each release is promoted to one or more channels. While you are developing and testing releases, Replicated recommends promoting to a channel that does not have any real customers assigned, such as the default Unstable channel. When the release is ready to be shared externally with customers, you can then promote to a channel that has the target customers assigned, such as the Beta or Stable channel.
+
+A release cannot be edited after it is promoted to a channel. This means that you can test a release on an internal development channel, and know with confidence that the same release will be available to your customers when you promote it to a channel where real customers are assigned.
+
+### Properties
+
+Each release has properties. You define release properties when you promote a release to a channel. You can edit release properties at any time from the channel **Release History** page in the Vendor Portal. For more information, see [Edit Release Properties](releases-creating-releases#edit-release-properties) in _Managing Releases with the Vendor Portal_.
+
+The following shows an example of the release properties dialog:
+
+
+
+[View a larger version of this image](/images/release-properties.png)
+
+As shown in the screenshot above, the release has the following properties:
+
+* **Version label**: The version label for the release. Version labels have the following requirements:
+
+ * If semantic versioning is enabled for the channel, you must use a valid semantic version. For more information, see [Semantic Versioning](#semantic-versioning).
+
+
+
+[View a larger version of this image](/images/release-sequences.png)
+
+#### Instance Sequences
+
+When a new version is available for upgrade, such as when KOTS checks for upstream updates, when the user syncs their license, or when the user makes a config change, the KOTS Admin Console assigns a unique instance sequence number to that version. The instance sequence in the Admin Console starts at 0 and increments for each identifier that is returned when a new version is available.
+
+This instance sequence is unrelated to the release sequence displayed in the Vendor Portal, and it is likely that the instance sequence will differ from the release sequence. Instance sequences are only tracked by KOTS instances, and the Vendor Portal has no knowledge of these numbers.
+
+The following graphic shows instance sequence numbers on the Admin Console dashboard:
+
+
+
+[View a larger version of this image](/images/instance-sequences.png)
+
+#### Channel Sequences
+
+When a release is promoted to a channel, a channel sequence number is assigned. This unique sequence number increments by one and tracks the order in which releases were promoted to a channel. You can view the channel sequence on the **Release History** page in the Vendor Portal, as shown in the image below:
+
+
+
+[View a larger version of this image](/images/release-history-channel-sequence.png)
+
+The channel sequence is also used in certain URLs. For example, a release with a *release sequence* of `170` can have a *channel sequence* of `125`. The air gap download URL for that release can contain `125` in the URL, even though the release sequence is `170`.
+
+Ordering is more complex if some or all of the releases in a channel have a semantic version label and semantic versioning is enabled for the channel. For more information, see [Semantic Versioning Sequence](#semantic-versioning-sequence).
+
+#### Semantic Versioning Sequence
+
+For channels with semantic versioning enabled, the Admin Console sequences instance releases by their semantic versions instead of their promotion dates.
+
+If releases without a valid semantic version are already promoted to a channel, the Admin Console sorts the releases that do have semantic versions starting with the earliest version and proceeding to the latest. The releases with non-semantic versioning stay in the order of their promotion dates. For example, assume that you promote these releases in the following order to a channel:
+
+- 1.0.0
+- abc
+- 0.1.0
+- xyz
+- 2.0.0
+
+Then, you enable semantic versioning on that channel. The Admin Console sequences the version history for the channel as follows:
+
+- 0.1.0
+- 1.0.0
+- abc
+- xyz
+- 2.0.0
+
+### Semantic Versioning
+
+Semantic versioning is available with Replicated KOTS v1.58.0 and later. Note the following:
+
+- For applications created in the Vendor Portal on or after February 23, 2022, semantic versioning is enabled by default on the Stable and Beta channels. Semantic versioning is disabled on the Unstable channel by default.
+
+- For existing applications created before February 23, 2022, semantic versioning is disabled by default on all channels.
+
+Semantic versioning is recommended because it makes versioning more predictable for users and lets you enforce versioning so that no one uses an incorrect version.
+
+To use semantic versioning:
+
+1. Enable semantic versioning on a channel, if it is not enabled by default. Click the **Edit channel settings** icon, and turn on the **Enable semantic versioning** toggle.
+1. Assign a semantic version number when you promote a release.
+
+Releases promoted to a channel with semantic versioning enabled are verified to ensure that the release version label is a valid semantic version. For more information about valid semantic versions, see [Semantic Versioning 2.0.0](https://semver.org).
+
+If you enable semantic versioning for a channel and then promote releases to it, Replicated recommends that you do not later disable semantic versioning for that channel.
+
+You can enable semantic versioning on a channel that already has releases promoted to it without semantic versioning. Any subsequently promoted releases must use semantic versioning. In this case, the channel will have releases with and without semantic version numbers. For information about how Replicated organizes these release sequences, see [Semantic Versioning Sequence](#semantic-versioning-sequence).
+
+### Demotion
+
+A channel release can be demoted from a channel. When a channel release is demoted, the release is no longer available for download, but is not withdrawn from environments where it was already downloaded or installed.
+
+The demoted release's channel sequence and version are not reused. For customers, the release appears to have been skipped. Un-demoting a release restores its place in the channel sequence, making it available for download and installation again.
+
+For information about how to demote a release, see [Demote a Release](/vendor/releases-creating-releases#demote-a-release) in _Managing Releases with the Vendor Portal_.
+
+## Vendor Portal Pages
+
+This section provides information about the channels and releases pages in the Vendor Portal.
+
+### Channels Page
+
+The **Channels** page in the Vendor Portal includes information about each channel. From the **Channels** page, you can edit and archive your channels. You can also edit the properties of the releases promoted to each channel, and view and edit the customers assigned to each channel.
+
+The following shows an example of a channel in the Vendor Portal **Channels** page:
+
+
+
+[View a larger version of this image](/images/channel-card.png)
+
+As shown in the image above, you can do the following from the **Channels** page:
+
+* Edit the channel settings by clicking on the settings icon, or archive the channel by clicking on the trash can icon. For information about channel settings, see [Settings](#settings).
+
+* In the **Adoption rate** section, view data on the adoption rate of releases promoted to the channel among customers assigned to the channel.
+
+* In the **Customers** section, view the number of active and inactive customers assigned to the channel. Click **Details** to go to the **Customers** page, where you can view details about the customers assigned to the channel.
+
+* In the **Latest release** section, view the properties of the latest release, and get information about any warnings or errors in the YAML files for the latest release.
+
+ Click **Release history** to access the history of all releases promoted to the channel. From the **Release History** page, you can view the version labels and files in each release that has been promoted to the selected channel.
+
+  From the **Release History** page, you can also build and download air gap bundles for use in air gap installations with the Replicated installers (Embedded Cluster, KOTS, kURL), edit the release properties for each release promoted to the channel, and demote a release from the channel.
+
+ The following shows an example of the **Release History** page:
+
+
+
+ [View a larger version of this image](/images/channel-card.png)
+
+* For applications that support KOTS, you can also do the following from the **Channel** page:
+
+ * In the **kURL installer** section, view the current kURL installer promoted to the channel. Click **Installer history** to view the history of kURL installers promoted to the channel. For more information about creating kURL installers, see [Creating a kURL Installer](packaging-embedded-kubernetes).
+
+ * In the **Install** section, view and copy the installation commands for the latest release on the channel.
+
+### Draft Release Page
+
+For applications that support installation with KOTS, the **Draft** page provides a YAML editor to add, edit, and delete your application files and Replicated custom resources. To open the **Draft** page, click **Releases > Create Release** in the Vendor Portal.
+
+The following shows an example of the **Draft** page in the Vendor Portal:
+
+
+
+ [View a larger version of this image](/images/guides/kots/default-yaml.png)
+
+You can do the following tasks on the **Draft** page:
+
+- In the file directory, manage the file directory structure. Replicated custom resource files are grouped together above the white line of the file directory. Application files are grouped together underneath the white line in the file directory.
+
+ Delete files using the trash icon that displays when you hover over a file. Create a new file or folder using the corresponding icons at the bottom of the file directory pane. You can also drag and drop files in and out of the folders.
+
+ 
+
+- Edit the YAML files by selecting a file in the directory and making changes in the YAML editor.
+
+- In the **Help** or **Config help** pane, view the linter for any errors. If there are no errors, you get an **Everything looks good!** message. If an error displays, you can click the **Learn how to configure** link. For more information, see [Linter Rules](/reference/linter).
+
+- Select the Config custom resource to preview how your application's Config page will look to your customers. The **Config preview** pane only appears when you select that file. For more information, see [About the Configuration Screen](config-screen-about).
+
+- Select the Application custom resource to preview how your application icon will look in the Admin Console. The **Application icon preview** only appears when you select that file. For more information, see [Customizing the Application Icon](admin-console-customize-app-icon).
diff --git a/docs/reference/releases-creating-channels.md b/docs/reference/releases-creating-channels.md
new file mode 100644
index 0000000000..ac1ada784e
--- /dev/null
+++ b/docs/reference/releases-creating-channels.md
@@ -0,0 +1,70 @@
+# Creating and Editing Channels
+
+This topic describes how to create and edit channels using the Replicated Vendor Portal. For more information about channels, see [About Channels and Releases](releases-about).
+
+For information about creating channels with the Replicated CLI, see [channel create](/reference/replicated-cli-channel-create).
+
+For information about creating and managing channels with the Vendor API v3, see the [channels](https://replicated-vendor-api.readme.io/reference/createchannel) section in the Vendor API v3 documentation.
+
+## Create a Channel
+
+To create a channel:
+
+1. From the Replicated [Vendor Portal](https://vendor.replicated.com), select **Channels** from the left menu.
+1. Click **Create Channel**.
+
+ The Create a new channel dialog opens. For example:
+
+
+
+1. Enter a name and description for the channel.
+1. (Recommended) Enable semantic versioning on the channel if it is not enabled by default by turning on **Enable semantic versioning**. For more information about semantic versioning and defaults, see [Semantic Versioning](releases-about#semantic-versioning).
+
+1. (Recommended) Enable an air gap bundle format that supports image digests and deduplication of image layers, by turning on **Enable new air gap bundle format**. For more information, see [Using Image Tags and Digests](private-images-tags-digests).
+
+1. Click **Create Channel**.
+
+## Edit a Channel
+
+To edit the settings of an existing channel:
+
+1. In the Vendor Portal, select **Channels** from the left menu.
+1. Click the gear icon on the top right of the channel that you want to modify.
+
+ The Channel settings dialog opens. For example:
+
+
+
+1. Edit the fields and click **Save**.
+
+ For more information about channel settings, see [Settings](releases-about#settings) in _About Channels and Releases_.
+
+## Archive a Channel
+
+You can archive an existing channel to prevent any new releases from being promoted to the channel.
+
+:::note
+You cannot archive a channel if:
+* There are customers assigned to the channel.
+* The channel is set as the default channel.
+
+Assign customers to a different channel and set a different channel as the default before archiving.
+:::
+
+To archive a channel with the Vendor Portal or the Replicated CLI:
+
+* **Vendor Portal**: In the Vendor Portal, go to the **Channels** page and click the trash can icon in the top right corner of the card for the channel that you want to archive.
+* **Replicated CLI**:
+ 1. Run the following command to find the ID for the channel that you want to archive:
+ ```
+ replicated channel ls
+ ```
+     The output of this command includes the ID and name for each channel, as well as information about the latest release version on each channel.
+
+ 1. Run the following command to archive the channel:
+ ```
+ replicated channel rm CHANNEL_ID
+ ```
+ Replace `CHANNEL_ID` with the channel ID that you retrieved in the previous step.
+
+ For more information, see [channel rm](/reference/replicated-cli-channel-rm) in the Replicated CLI documentation.
diff --git a/docs/reference/releases-creating-cli.mdx b/docs/reference/releases-creating-cli.mdx
new file mode 100644
index 0000000000..3bd7499eaf
--- /dev/null
+++ b/docs/reference/releases-creating-cli.mdx
@@ -0,0 +1,113 @@
+# Managing Releases with the CLI
+
+This topic describes how to use the Replicated CLI to create and promote releases.
+
+For information about creating and managing releases with the Vendor Portal, see [Managing Releases with the Vendor Portal](/vendor/releases-creating-releases).
+
+For information about creating and managing releases with the Vendor API v3, see the [releases](https://replicated-vendor-api.readme.io/reference/createrelease) section in the Vendor API v3 documentation.
+
+## Prerequisites
+
+Before you create a release using the Replicated CLI, complete the following prerequisites:
+
+* Install the Replicated CLI and then log in to authorize the CLI. See [Installing the Replicated CLI](/reference/replicated-cli-installing).
+
+* Create a new application using the `replicated app create APP_NAME` command. You only need to do this procedure one time for each application that you want to deploy. See [`app create`](/reference/replicated-cli-app-create) in _Reference_.
+
+* Set the `REPLICATED_APP` environment variable to the slug of the target application. See [Set Environment Variables](/reference/replicated-cli-installing#env-var) in _Installing the Replicated CLI_.
+
+ **Example**:
+
+ ```bash
+ export REPLICATED_APP=my-app-slug
+ ```
+
+## Create a Release From a Local Directory {#dir}
+
+You can use the Replicated CLI to create a release from a local directory that contains the release files.
+
+To create and promote a release:
+
+1. (Helm Charts Only) If your release contains any Helm charts:
+
+ 1. Package each Helm chart as a `.tgz` file. See [Packaging a Helm Chart for a Release](/vendor/helm-install-release).
+
+ 1. Move the `.tgz` file or files to the local directory that contains the release files:
+
+ ```bash
+ mv CHART_TGZ PATH_TO_RELEASE_DIR
+ ```
+      Where:
+      * `CHART_TGZ` is the `.tgz` Helm chart archive.
+      * `PATH_TO_RELEASE_DIR` is the path to the directory that contains the release files.
+
+ **Example**
+
+ ```bash
+ mv wordpress-1.3.5.tgz manifests
+ ```
+
+ 1. In the same directory that contains the release files, add a HelmChart custom resource for each Helm chart in the release. See [Configuring the HelmChart Custom Resource](helm-native-v2-using).
+
+1. Lint the application manifest files and ensure that there are no errors in the YAML:
+
+ ```bash
+ replicated release lint --yaml-dir=PATH_TO_RELEASE_DIR
+ ```
+
+ Where `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
+
+ For more information, see [release lint](/reference/replicated-cli-release-lint) and [Linter Rules](/reference/linter).
+
+1. Do one of the following:
+
+ * **Create and promote the release with one command**:
+
+ ```bash
+ replicated release create --yaml-dir PATH_TO_RELEASE_DIR --lint --promote CHANNEL
+ ```
+ Where:
+ * `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
+     * `CHANNEL` is the channel ID or the case-sensitive name of the channel.
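+
+     **Example** (assuming a `manifests` directory and the default Unstable channel):
+
+     ```bash
+     replicated release create --yaml-dir manifests --lint --promote Unstable
+     ```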
+
+ * **Create and edit the release before promoting**:
+
+ 1. Create the release:
+
+ ```bash
+ replicated release create --yaml-dir PATH_TO_RELEASE_DIR
+ ```
+ Where `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
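+
+      **Example**:
+
+      ```bash
+      replicated release create --yaml-dir manifests
+      ```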
+
+ For more information, see [release create](/reference/replicated-cli-release-create).
+
+ 1. Edit and update the release as desired:
+
+ ```
+ replicated release update SEQUENCE --yaml-dir PATH_TO_RELEASE_DIR
+ ```
+ Where:
+
+ - `SEQUENCE` is the release sequence number. This identifies the existing release to be updated.
+ - `PATH_TO_RELEASE_DIR` is the path to the directory with the release files.
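+
+      **Example** (using an illustrative release sequence number):
+
+      ```bash
+      replicated release update 2 --yaml-dir manifests
+      ```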
+
+ For more information, see [release update](/reference/replicated-cli-release-update).
+
+ 1. Promote the release when you are ready to test it. Releases cannot be edited after they are promoted. To make changes after promotion, create a new release.
+
+ ```
+ replicated release promote SEQUENCE CHANNEL
+ ```
+
+ Where:
+
+ - `SEQUENCE` is the release sequence number.
+       - `CHANNEL` is the channel ID or the case-sensitive name of the channel.
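+
+      **Example** (using an illustrative sequence number and the Unstable channel):
+
+      ```bash
+      replicated release promote 2 Unstable
+      ```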
+
+ For more information, see [release promote](/reference/replicated-cli-release-promote).
+
+1. Verify that the release was promoted to the target channel:
+
+ ```
+ replicated release ls
+ ```
\ No newline at end of file
diff --git a/docs/reference/releases-creating-customer.mdx b/docs/reference/releases-creating-customer.mdx
new file mode 100644
index 0000000000..15d519192b
--- /dev/null
+++ b/docs/reference/releases-creating-customer.mdx
@@ -0,0 +1,114 @@
+import ChangeChannel from "../partials/customers/_change-channel.mdx"
+import Download from "../partials/customers/_download.mdx"
+import GitOpsNotRecommended from "../partials/gitops/_gitops-not-recommended.mdx"
+
+# Creating and Managing Customers
+
+This topic describes how to create and manage customers in the Replicated Vendor Portal. For more information about customer licenses, see [About Customers](licenses-about).
+
+## Create a Customer
+
+This procedure describes how to create a new customer in the Vendor Portal. You can edit customer details at any time.
+
+For information about creating a customer with the Replicated CLI, see [customer create](/reference/replicated-cli-customer-create).
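+
+For example, a minimal sketch of creating a customer with the CLI (the name and channel values are illustrative, and available flags can vary by CLI version):
+
+```bash
+replicated customer create --name "Example Customer" --channel Stable
+```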
+
+For information about creating and managing customers with the Vendor API v3, see the [customers](https://replicated-vendor-api.readme.io/reference/getcustomerentitlements) section in the Vendor API v3 documentation.
+
+To create a customer:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
+
+ The **Create a new customer** page opens:
+
+ 
+
+ [View a larger version of this image](/images/create-customer.png)
+
+1. For **Customer name**, enter a name for the customer.
+
+1. For **Customer email**, enter the email address for the customer.
+
+ :::note
+ A customer email address is required for Helm installations. This email address is never used to send emails to customers.
+ :::
+
+1. For **Assigned channel**, assign the customer to one of your channels. You can select any channel that has at least one release. The channel a customer is assigned to determines the application releases that they can install. For more information, see [Channel Assignment](licenses-about#channel-assignment) in _About Customers_.
+
+ :::note
+
+
+[View a larger version of this image](/images/customers-filter.png)
+
+You can filter customers based on whether they are active, by license type, and by channel name. You can filter using multiple criteria, such as Active, Paid, and Stable. However, you can select only one license type and one channel at a time.
+
+If there is adoption rate data available for the channel that you are filtering by, you can also filter by current version, previous version, and older versions.
+
+You can also filter customers by custom ID or email address. To filter customers by custom ID or email, use the search box and prepend your search term with `customId:` (for example, `customId:1234`) or `email:` (for example, `email:bob@replicated.com`).
+
+If you want to filter information using multiple license types or channels, you can download a CSV file instead. For more information, see [Export Customer and Instance Data](#export) above.
diff --git a/docs/reference/releases-creating-releases.mdx b/docs/reference/releases-creating-releases.mdx
new file mode 100644
index 0000000000..e7786da468
--- /dev/null
+++ b/docs/reference/releases-creating-releases.mdx
@@ -0,0 +1,127 @@
+import RequiredReleasesLimitations from "../partials/releases/_required-releases-limitations.mdx"
+import RequiredReleasesDescription from "../partials/releases/_required-releases-description.mdx"
+
+# Managing Releases with the Vendor Portal
+
+This topic describes how to use the Replicated Vendor Portal to create and promote releases, edit releases, edit release properties, and archive releases.
+
+For information about creating and managing releases with the CLI, see [Managing Releases with the CLI](/vendor/releases-creating-cli).
+
+For information about creating and managing releases with the Vendor API v3, see the [releases](https://replicated-vendor-api.readme.io/reference/createrelease) and [channelReleases](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl) sections in the Vendor API v3 documentation.
+
+## Create a Release
+
+To create and promote a release in the Vendor Portal:
+
+1. From the **Applications** dropdown list, select **Create an app** or select an existing application to update.
+
+1. Click **Releases > Create release**.
+
+ 
+
+ [View a larger version of this image](/images/release-create-new.png)
+
+1. Add your files to the release. You can do this by dragging and dropping files into the file directory in the YAML editor, or by clicking the plus icon to add a new, untitled YAML file.
+
+1. For any Helm charts that you add to the release, in the **Select Installation Method** dialog, select the version of the HelmChart custom resource that KOTS will use to install the chart. `kots.io/v1beta2` is recommended. For more information about the HelmChart custom resource, see [Configuring the HelmChart Custom Resource](helm-native-v2-using).
+
+
+
+ [View a larger version of this image](/images/helm-select-install-method.png)
+
+1. Click **Save release**. This saves a draft that you can continue to edit until you promote it.
+
+1. Click **Promote**. In the **Promote Release** dialog, edit the fields:
+
+    For more information about the requirements and limitations of each field, see [Properties](releases-about#properties) in _About Channels and Releases_.
+
+    | Field | Description |
+    |---|---|
+    | Channel | Select the channel where you want to promote the release. If you are not sure which channel to use, use the default Unstable channel. |
+    | Version label | Enter a version label. If you have one or more Helm charts in your release, the Vendor Portal automatically populates this field. You can change the version label to any value. |
+    | Requirements | Select **Prevent this release from being skipped during upgrades** to mark the release as required for KOTS installations. This option does not apply to installations with Helm. |
+    | Release notes | Add release notes. The release notes support markdown and are shown to your customer. |
+
+ [View a larger image](/images/releases-edit-draft.png)
+
+1. Click **Save** to save your updated draft.
+1. (Optional) Click **Promote**.
+
+## Edit Release Properties
+
+You can edit the properties of a release at any time. For more information about release properties, see [Properties](releases-about#properties) in _About Channels and Releases_.
+
+To edit release properties:
+
+1. Go to **Channels**.
+1. In the channel where the release was promoted, click **Release History**.
+1. For the release sequence that you want to edit, open the dot menu and click **Edit release**.
+1. Edit the properties as needed.
+
+
+ [View a larger image](/images/release-properties.png)
+1. Click **Update Release**.
+
+## Archive a Release
+
+You can archive releases to remove them from view on the **Releases** page. Archiving a release that has been promoted does _not_ remove the release from the channel's **Release History** page or prevent KOTS from downloading the archived release.
+
+To archive one or more releases:
+
+1. From the **Releases** page, click the trash can icon in the upper right corner.
+1. Select one or more releases.
+1. Click **Archive Releases**.
+1. Confirm the archive action when prompted.
+
+## Demote a Release
+
+A channel release can be demoted from a channel. When a channel release is demoted, the release is no longer available for download, but is not withdrawn from environments where it was already downloaded or installed. For more information, see [Demotion](/vendor/releases-about#demotion) in _About Channels and Releases_.
+
+For information about demoting and un-demoting releases with the Replicated CLI, see [channel demote](/reference/replicated-cli-channel-demote) and [channel un-demote](/reference/replicated-cli-channel-un-demote).
+
+To demote a release in the Vendor Portal:
+
+1. Go to **Channels**.
+1. In the channel where the release was promoted, click **Release History**.
+1. For the release sequence that you want to demote, open the dot menu and select **Demote Release**.
+
+ 
+ [View a larger version of this image](/images/channels-release-history.png)
+
+ After the release is demoted, the given release sequence is greyed out and a **Demoted** label is displayed next to the release on the **Release History** page.
\ No newline at end of file
diff --git a/docs/reference/releases-share-download-portal.md b/docs/reference/releases-share-download-portal.md
new file mode 100644
index 0000000000..58d5a0fb21
--- /dev/null
+++ b/docs/reference/releases-share-download-portal.md
@@ -0,0 +1,85 @@
+import DownloadPortal from "../partials/kots/_download-portal-about.mdx"
+
+# Downloading Assets from the Download Portal
+
+This topic describes how to download customer license files, air gap bundles, and other assets from the Replicated Download Portal.
+
+For information about downloading air gap bundles and licenses with the Vendor API v3, see the following pages in the Vendor API v3 documentation:
+* [Download a customer license file as YAML](https://replicated-vendor-api.readme.io/reference/downloadlicense)
+* [Trigger airgap build for a channel's release](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbuild)
+* [Get airgap bundle download URL for the active release on the channel](https://replicated-vendor-api.readme.io/reference/channelreleaseairgapbundleurl)
+
+## Overview
+
+
+
+ [View a larger version of this image](/images/download-portal-password-popup.png)
+
+1. Click **Copy** to copy the password to your clipboard.
+
+ After the password is saved, it cannot be retrieved again. If you lose the password, you can generate a new one.
+
+1. Click **Save** to set the password.
+
+1. Click **Visit download portal** to log in to the Download Portal and preview your customer's experience.
+
+ :::note
+ By default, the Download Portal uses the domain `get.replicated.com`. You can optionally use a custom domain for the Download Portal. For more information, see [Using Custom Domains](/vendor/custom-domains-using).
+ :::
+
+1. In the Download Portal, on the left side of the screen, select one of the following:
+ * **Bring my own Kubernetes**: View the downloadable assets for existing cluster installations with KOTS.
+ * **Embedded Kubernetes**: View the downloadable assets for Replicated kURL installations.
+
+ :::note
+ Installation assets for [Replicated Embedded Cluster](/vendor/embedded-overview) are not available for download in the Download Portal.
+ :::
+
+ The following is an example of the Download Portal for an air gap customer:
+
+ 
+
+ [View a larger version of this image](/images/download-portal-existing-cluster.png)
+
+1. Under **Select application version**, use the dropdown to select the target application release version. The Download Portal automatically makes the correct air gap bundles available for download based on the selected application version.
+
+1. Click the download button to download each asset.
+
+1. To share installation files with a customer, send the customer their unique link and password for the Download Portal.
diff --git a/docs/reference/releases-sharing-license-install-script.mdx b/docs/reference/releases-sharing-license-install-script.mdx
new file mode 100644
index 0000000000..aff2433306
--- /dev/null
+++ b/docs/reference/releases-sharing-license-install-script.mdx
@@ -0,0 +1,136 @@
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+# Finding Installation Commands for a Release
+
+This topic describes where to find the installation commands and instructions for releases in the Replicated Vendor Portal.
+
+For information about getting installation commands with the Replicated CLI, see [channel inspect](/reference/replicated-cli-channel-inspect). For information about getting installation commands with the Vendor API v3, see [Get install commands for a specific channel release](https://replicated-vendor-api.readme.io/reference/getchannelreleaseinstallcommands) in the Vendor API v3 documentation.
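+
+For example, a minimal sketch with the Replicated CLI (replace `CHANNEL_ID` with the ID of the target channel):
+
+```bash
+replicated channel inspect CHANNEL_ID
+```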
+
+## Get Commands for the Latest Release
+
+Every channel in the Vendor Portal has an **Install** section where you can find installation commands for the latest release on the channel.
+
+To get the installation commands for the latest release:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), go to the **Channels** page.
+
+1. On the target channel card, under **Install**, click the tab for the type of installation command that you want to view:
+
+   * **KOTS**: View the command for installing with Replicated KOTS in existing clusters.
+
+     [View a larger version of this image](/images/channel-card-install-kots.png)
+
+   * **Embedded Cluster / kURL**: View the commands for installing with Replicated Embedded Cluster or Replicated kURL on VMs or bare metal servers. In the dropdown, choose **kURL** or **Embedded Cluster** to view the command for the target installer:
+
+     [View a larger version of this image](/images/channel-card-install-kurl.png)
+
+     [View a larger version of this image](/images/channel-card-install-ec.png)
+
+     :::note
+     The Embedded Cluster installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
+     :::
+
+   * **Helm**: View the command for installing with the Helm CLI in an existing cluster.
+
+     [View a larger version of this image](/images/channel-card-install-helm.png)
+
+     :::note
+     The Helm installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
+     :::
+
+
+ [View a larger version of this image](/images/release-history-link.png)
+
+1. For the target release version, open the dot menu and click **Install Commands**.
+
+ 
+
+ [View a larger version of this image](/images/channels-release-history.png)
+
+1. In the **Install Commands** dialog, click the tab for the type of installation command that you want to view:
+
+   * **KOTS**: View the command for installing with Replicated KOTS in existing clusters.
+
+     [View a larger version of this image](/images/release-history-install-kots.png)
+
+   * **Embedded Cluster / kURL**: View the commands for installing with Replicated Embedded Cluster or Replicated kURL on VMs or bare metal servers. In the dropdown, choose **kURL** or **Embedded Cluster** to view the command for the target installer:
+
+     [View a larger version of this image](/images/release-history-install-kurl.png)
+
+     [View a larger version of this image](/images/release-history-install-embedded-cluster.png)
+
+     :::note
+     The Embedded Cluster installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
+     :::
+
+   * **Helm**: View the command for installing with the Helm CLI in an existing cluster.
+
+     [View a larger version of this image](/images/release-history-install-helm.png)
+
+     :::note
+     The Helm installation instructions are customer-specific. Click **View customer list** to navigate to the page for the target customer. For more information, see [Get Customer-Specific Installation Instructions for Helm or Embedded Cluster](#customer-specific) below.
+     :::
+
+* **Helm**: View the customer-specific Helm CLI installation instructions. For more information about installing with the Helm CLI, see [Installing with Helm](/vendor/install-with-helm).
+
+  [View a larger version of this image](/images/helm-install-instructions-dialog.png)
+
+* **Embedded Cluster**: View the customer-specific Embedded Cluster installation instructions. For more information about installing with Embedded Cluster, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded).
+
+  [View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
+
+
+ [View a larger version of this image](/images/service-accounts.png)
+
+   1. For **Nickname**, enter a name for the token. Names for service accounts must be unique within a given team.
+
+ 1. For **RBAC**, select the RBAC policy from the dropdown list. The token must have `Admin` access to create new releases.
+
+ This list includes the Vendor Portal default policies `Admin` and `Read Only`. Any custom policies also display in this list. For more information, see [Configuring RBAC Policies](team-management-rbac-configuring).
+
+   Users with a non-admin RBAC role cannot select any other RBAC role when creating a token. They are restricted to creating tokens with their own level of access to prevent permission elevation.
+
+   1. (Optional) For custom RBAC policies, select the **Limit to read-only version of above policy** checkbox if you want to use a policy that has Read/Write permissions but limit this service account to read-only access. This option lets you maintain one version of a custom RBAC policy and use it in two ways: as read/write and as read-only.
+
+1. Select **Create Service Account**.
+
+1. Copy the service account token and save it in a secure location. The token will not be available to view again.
+
+ :::note
+ To remove a service account, select **Remove** for the service account that you want to delete.
+ :::
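+
+   For example, after saving the token, you can make it available to the Replicated CLI by exporting the `REPLICATED_API_TOKEN` environment variable (the token value shown is hypothetical):
+
+   ```bash
+   export REPLICATED_API_TOKEN=2hXKLCfViAgYFVbYLvN3rJkMoWx
+   ```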
+
+### Generate a User API Token
+
+To generate a user API token:
+
+1. Log in to the Vendor Portal and go to the [Account Settings](https://vendor.replicated.com/account-settings) page.
+1. Under **User API Tokens**, select **Create a user API token**. If one or more tokens already exist, you can add another by selecting **New user API token**.
+
+
+
+ [View a larger version of this image](/images/user-token-list.png)
+
+1. In the **New user API token** dialog, enter a name for the token in the **Nickname** field. Names for user API tokens must be unique per user.
+
+
+
+ [View a larger version of this image](/images/user-token-create.png)
+
+1. Select the required permissions or use the default **Read and Write** permissions. Then select **Create token**.
+
+ :::note
+ The token must have `Read and Write` access to create new releases.
+ :::
+
+1. Copy the user API token that displays and save it in a secure location. The token will not be available to view again.
+
+ :::note
+ To revoke a token, select **Revoke token** for the token that you want to delete.
+ :::
diff --git a/docs/reference/replicated-cli-api-get.mdx b/docs/reference/replicated-cli-api-get.mdx
deleted file mode 100644
index 37c209516c..0000000000
--- a/docs/reference/replicated-cli-api-get.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
-# replicated api get
-
-Make ad-hoc GET API calls to the Replicated API
-
-### Synopsis
-
-This is essentially like curl for the Replicated API, but
-uses your local credentials and prints the response unmodified.
-
-We recommend piping the output to jq for easier reading.
-
-Pass the PATH of the request as the final argument. Do not include the host or version.
-
-```
-replicated api get [flags]
-```
-
-### Examples
-
-```
-replicated api get /v3/apps
-```
-
-### Options
-
-```
- -h, --help help for get
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated api](replicated-cli-api) - Make ad-hoc API calls to the Replicated API
-
diff --git a/docs/reference/replicated-cli-api-patch.mdx b/docs/reference/replicated-cli-api-patch.mdx
deleted file mode 100644
index 95abc49058..0000000000
--- a/docs/reference/replicated-cli-api-patch.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated api patch
-
-Make ad-hoc PATCH API calls to the Replicated API
-
-### Synopsis
-
-This is essentially like curl for the Replicated API, but
-uses your local credentials and prints the response unmodified.
-
-We recommend piping the output to jq for easier reading.
-
-Pass the PATH of the request as the final argument. Do not include the host or version.
-
-```
-replicated api patch [flags]
-```
-
-### Examples
-
-```
-replicated api patch /v3/customer/2VffY549paATVfHSGpJhjh6Ehpy -b '{"name":"Valuable Customer"}'
-```
-
-### Options
-
-```
- -b, --body string JSON body to send with the request
- -h, --help help for patch
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated api](replicated-cli-api) - Make ad-hoc API calls to the Replicated API
-
diff --git a/docs/reference/replicated-cli-api-post.mdx b/docs/reference/replicated-cli-api-post.mdx
deleted file mode 100644
index 97d526b15b..0000000000
--- a/docs/reference/replicated-cli-api-post.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated api post
-
-Make ad-hoc POST API calls to the Replicated API
-
-### Synopsis
-
-This is essentially like curl for the Replicated API, but
-uses your local credentials and prints the response unmodified.
-
-We recommend piping the output to jq for easier reading.
-
-Pass the PATH of the request as the final argument. Do not include the host or version.
-
-```
-replicated api post [flags]
-```
-
-### Examples
-
-```
-replicated api post /v3/app/2EuFxKLDxKjPNk2jxMTmF6Vxvxu/channel -b '{"name":"marc-waz-here"}'
-```
-
-### Options
-
-```
- -b, --body string JSON body to send with the request
- -h, --help help for post
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated api](replicated-cli-api) - Make ad-hoc API calls to the Replicated API
-
diff --git a/docs/reference/replicated-cli-api-put.mdx b/docs/reference/replicated-cli-api-put.mdx
deleted file mode 100644
index ef66438d7e..0000000000
--- a/docs/reference/replicated-cli-api-put.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated api put
-
-Make ad-hoc PUT API calls to the Replicated API
-
-### Synopsis
-
-This is essentially like curl for the Replicated API, but
-uses your local credentials and prints the response unmodified.
-
-We recommend piping the output to jq for easier reading.
-
-Pass the PATH of the request as the final argument. Do not include the host or version.
-
-```
-replicated api put [flags]
-```
-
-### Examples
-
-```
-replicated api put /v3/app/2EuFxKLDxKjPNk2jxMTmF6Vxvxu/channel/2QLPm10JPkta7jO3Z3Mk4aXTPyZ -b '{"name":"marc-waz-here2"}'
-```
-
-### Options
-
-```
- -b, --body string JSON body to send with the request
- -h, --help help for put
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated api](replicated-cli-api) - Make ad-hoc API calls to the Replicated API
-
diff --git a/docs/reference/replicated-cli-api.mdx b/docs/reference/replicated-cli-api.mdx
deleted file mode 100644
index 39391062c4..0000000000
--- a/docs/reference/replicated-cli-api.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
-# replicated api
-
-Make ad-hoc API calls to the Replicated API
-
-### Options
-
-```
- -h, --help help for api
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
-* [replicated api get](replicated-cli-api-get) - Make ad-hoc GET API calls to the Replicated API
-* [replicated api patch](replicated-cli-api-patch) - Make ad-hoc PATCH API calls to the Replicated API
-* [replicated api post](replicated-cli-api-post) - Make ad-hoc POST API calls to the Replicated API
-* [replicated api put](replicated-cli-api-put) - Make ad-hoc PUT API calls to the Replicated API
-
diff --git a/docs/reference/replicated-cli-app-create.mdx b/docs/reference/replicated-cli-app-create.mdx
deleted file mode 100644
index 82e0b6c5cc..0000000000
--- a/docs/reference/replicated-cli-app-create.mdx
+++ /dev/null
@@ -1,49 +0,0 @@
-# replicated app create
-
-Create a new application
-
-### Synopsis
-
-Create a new application in your Replicated account.
-
-This command allows you to initialize a new application that can be distributed
-and managed using the KOTS platform. When you create a new app, it will be set up
-with default configurations, which you can later customize.
-
-The NAME argument is required and will be used as the application's name.
-
-```
-replicated app create NAME [flags]
-```
-
-### Examples
-
-```
-# Create a new app named "My App"
-replicated app create "My App"
-
-# Create a new app and output the result in JSON format
-replicated app create "Another App" --output json
-
-# Create a new app with a specific name and view details in table format
-replicated app create "Custom App" --output table
-```
-
-### Options
-
-```
- -h, --help help for create
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated app](replicated-cli-app) - Manage applications
-
diff --git a/docs/reference/replicated-cli-app-ls.mdx b/docs/reference/replicated-cli-app-ls.mdx
deleted file mode 100644
index 09688a8cda..0000000000
--- a/docs/reference/replicated-cli-app-ls.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
-# replicated app ls
-
-List applications
-
-### Synopsis
-
-List all applications in your Replicated account,
-or search for a specific application by name or ID.
-
-This command displays information about your applications, including their
-names, IDs, and associated channels. If a NAME argument is provided, it will
-filter the results to show only applications that match the given name or ID.
-
-The output can be customized using the --output flag to display results in
-either table or JSON format.
-
-```
-replicated app ls [NAME] [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Examples
-
-```
-# List all applications
-replicated app ls
-
-# Search for a specific application by name
-replicated app ls "My App"
-
-# List applications and output in JSON format
-replicated app ls --output json
-
-# Search for an application and display results in table format
-replicated app ls "App Name" --output table
-```
-
-### Options
-
-```
- -h, --help help for ls
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated app](replicated-cli-app) - Manage applications
-
diff --git a/docs/reference/replicated-cli-app-rm.mdx b/docs/reference/replicated-cli-app-rm.mdx
deleted file mode 100644
index 54ff4aa625..0000000000
--- a/docs/reference/replicated-cli-app-rm.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
-# replicated app rm
-
-Delete an application
-
-### Synopsis
-
-Delete an application from your Replicated account.
-
-This command allows you to permanently remove an application from your account.
-Once deleted, the application and all associated data will be irretrievably lost.
-
-Use this command with caution as there is no way to undo this operation.
-
-```
-replicated app rm NAME [flags]
-```
-
-### Aliases
-
-```
-rm, delete
-```
-
-### Examples
-
-```
-# Delete a app named "My App"
-replicated app delete "My App"
-
-# Delete an app and skip the confirmation prompt
-replicated app delete "Another App" --force
-
-# Delete an app and output the result in JSON format
-replicated app delete "Custom App" --output json
-```
-
-### Options
-
-```
- -f, --force Skip confirmation prompt. There is no undo for this action.
- -h, --help help for rm
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated app](replicated-cli-app) - Manage applications
-
diff --git a/docs/reference/replicated-cli-app.mdx b/docs/reference/replicated-cli-app.mdx
deleted file mode 100644
index 515f3edebf..0000000000
--- a/docs/reference/replicated-cli-app.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
-# replicated app
-
-Manage applications
-
-### Synopsis
-
-The app command allows you to manage applications in your Replicated account.
-
-This command provides a suite of subcommands for creating, listing, updating, and
-deleting applications. You can perform operations such as creating new apps,
-viewing app details, modifying app settings, and removing apps from your account.
-
-Use the various subcommands to:
-- Create new applications
-- List all existing applications
-- View details of a specific application
-- Update application settings
-- Delete applications from your account
-
-### Examples
-
-```
-# List all applications
-replicated app ls
-
-# Create a new application
-replicated app create "My New App"
-
-# View details of a specific application
-replicated app inspect "My App Name"
-
-# Delete an application
-replicated app delete "App to Remove"
-
-# Update an application's settings
-replicated app update "My App" --channel stable
-
-# List applications with custom output format
-replicated app ls --output json
-```
-
-### Options
-
-```
- -h, --help help for app
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
-* [replicated app create](replicated-cli-app-create) - Create a new application
-* [replicated app ls](replicated-cli-app-ls) - List applications
-* [replicated app rm](replicated-cli-app-rm) - Delete an application
-
diff --git a/docs/reference/replicated-cli-channel-create.mdx b/docs/reference/replicated-cli-channel-create.mdx
deleted file mode 100644
index 5b8f51f829..0000000000
--- a/docs/reference/replicated-cli-channel-create.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
-# replicated channel create
-
-Create a new channel in your app
-
-### Synopsis
-
-Create a new channel in your app and print the channel on success.
-
-```
-replicated channel create [flags]
-```
-
-### Examples
-
-```
-replicated channel create --name Beta --description 'New features subject to change'
-```
-
-### Options
-
-```
- --description string A longer description of this channel
- -h, --help help for create
- --name string The name of this channel
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-demote.mdx b/docs/reference/replicated-cli-channel-demote.mdx
deleted file mode 100644
index 1160fb7368..0000000000
--- a/docs/reference/replicated-cli-channel-demote.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated channel demote
-
-Demote a release from a channel
-
-### Synopsis
-
-Demote a channel release from a channel using a channel sequence or release sequence.
-
-```
-replicated channel demote CHANNEL_ID_OR_NAME [flags]
-```
-
-### Examples
-
-```
- # Demote a release from a channel by channel sequence
- replicated channel release demote Beta --channel-sequence 15
-
- # Demote a release from a channel by release sequence
- replicated channel release demote Beta --release-sequence 12
-```
-
-### Options
-
-```
- --channel-sequence int The channel sequence to demote
- -h, --help help for demote
- --release-sequence int The release sequence to demote
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-disable-semantic-versioning.mdx b/docs/reference/replicated-cli-channel-disable-semantic-versioning.mdx
deleted file mode 100644
index 5e8b8f711f..0000000000
--- a/docs/reference/replicated-cli-channel-disable-semantic-versioning.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
-# replicated channel disable-semantic-versioning
-
-Disable semantic versioning for CHANNEL_ID
-
-### Synopsis
-
-Disable semantic versioning for the CHANNEL_ID.
-
-```
-replicated channel disable-semantic-versioning CHANNEL_ID [flags]
-```
-
-### Examples
-
-```
-replicated channel disable-semantic-versioning CHANNEL_ID
-```
-
-### Options
-
-```
- -h, --help help for disable-semantic-versioning
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-enable-semantic-versioning.mdx b/docs/reference/replicated-cli-channel-enable-semantic-versioning.mdx
deleted file mode 100644
index c615e7962b..0000000000
--- a/docs/reference/replicated-cli-channel-enable-semantic-versioning.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
-# replicated channel enable-semantic-versioning
-
-Enable semantic versioning for CHANNEL_ID
-
-### Synopsis
-
-Enable semantic versioning for the CHANNEL_ID.
-
-```
-replicated channel enable-semantic-versioning CHANNEL_ID [flags]
-```
-
-### Examples
-
-```
-replicated channel enable-semantic-versioning CHANNEL_ID
-```
-
-### Options
-
-```
- -h, --help help for enable-semantic-versioning
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-inspect.mdx b/docs/reference/replicated-cli-channel-inspect.mdx
deleted file mode 100644
index 545210c413..0000000000
--- a/docs/reference/replicated-cli-channel-inspect.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
-# replicated channel inspect
-
-Show full details for a channel
-
-### Synopsis
-
-Show full details for a channel
-
-```
-replicated channel inspect CHANNEL_ID [flags]
-```
-
-### Options
-
-```
- -h, --help help for inspect
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-ls.mdx b/docs/reference/replicated-cli-channel-ls.mdx
deleted file mode 100644
index 158cc21f32..0000000000
--- a/docs/reference/replicated-cli-channel-ls.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
-# replicated channel ls
-
-List all channels in your app
-
-### Synopsis
-
-List all channels in your app
-
-```
-replicated channel ls [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Options
-
-```
- -h, --help help for ls
- --output string The output format to use. One of: json|table (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-rm.mdx b/docs/reference/replicated-cli-channel-rm.mdx
deleted file mode 100644
index 0ddabfc3ca..0000000000
--- a/docs/reference/replicated-cli-channel-rm.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
-# replicated channel rm
-
-Remove (archive) a channel
-
-### Synopsis
-
-Remove (archive) a channel
-
-```
-replicated channel rm CHANNEL_ID [flags]
-```
-
-### Aliases
-
-```
-rm, delete
-```
-
-### Options
-
-```
- -h, --help help for rm
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel-un-demote.mdx b/docs/reference/replicated-cli-channel-un-demote.mdx
deleted file mode 100644
index 8f01ac1488..0000000000
--- a/docs/reference/replicated-cli-channel-un-demote.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated channel un-demote
-
-Un-demote a release from a channel
-
-### Synopsis
-
-Un-demote a channel release from a channel using a channel sequence or release sequence.
-
-```
-replicated channel un-demote CHANNEL_ID_OR_NAME [flags]
-```
-
-### Examples
-
-```
- # Un-demote a release from a channel by channel sequence
- replicated channel release un-demote Beta --channel-sequence 15
-
- # Un-demote a release from a channel by release sequence
- replicated channel release un-demote Beta --release-sequence 12
-```
-
-### Options
-
-```
- --channel-sequence int The channel sequence to un-demote
- -h, --help help for un-demote
- --release-sequence int The release sequence to un-demote
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated channel](replicated-cli-channel) - List channels
-
diff --git a/docs/reference/replicated-cli-channel.mdx b/docs/reference/replicated-cli-channel.mdx
deleted file mode 100644
index 578959147d..0000000000
--- a/docs/reference/replicated-cli-channel.mdx
+++ /dev/null
@@ -1,33 +0,0 @@
-# replicated channel
-
-List channels
-
-### Synopsis
-
-List channels
-
-### Options
-
-```
- -h, --help help for channel
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated](replicated) - Manage your Commercial Software Distribution Lifecycle using Replicated
-* [replicated channel create](replicated-cli-channel-create) - Create a new channel in your app
-* [replicated channel demote](replicated-cli-channel-demote) - Demote a release from a channel
-* [replicated channel disable-semantic-versioning](replicated-cli-channel-disable-semantic-versioning) - Disable semantic versioning for CHANNEL_ID
-* [replicated channel enable-semantic-versioning](replicated-cli-channel-enable-semantic-versioning) - Enable semantic versioning for CHANNEL_ID
-* [replicated channel inspect](replicated-cli-channel-inspect) - Show full details for a channel
-* [replicated channel ls](replicated-cli-channel-ls) - List all channels in your app
-* [replicated channel rm](replicated-cli-channel-rm) - Remove (archive) a channel
-* [replicated channel un-demote](replicated-cli-channel-un-demote) - Un-demote a release from a channel
-
diff --git a/docs/reference/replicated-cli-cluster-addon-create-object-store.mdx b/docs/reference/replicated-cli-cluster-addon-create-object-store.mdx
deleted file mode 100644
index 0aa6963e20..0000000000
--- a/docs/reference/replicated-cli-cluster-addon-create-object-store.mdx
+++ /dev/null
@@ -1,52 +0,0 @@
-# replicated cluster addon create object-store
-
-Create an object store bucket for a cluster.
-
-### Synopsis
-
-Creates an object store bucket for a cluster, requiring a bucket name prefix. The bucket name will be auto-generated using the format "[BUCKET_PREFIX]-[ADDON_ID]-cmx". This feature provisions an object storage bucket that can be used for storage in your cluster environment.
-
-```
-replicated cluster addon create object-store CLUSTER_ID --bucket-prefix BUCKET_PREFIX [flags]
-```
-
-### Examples
-
-```
-# Create an object store bucket with a specified prefix
-replicated cluster addon create object-store 05929b24 --bucket-prefix mybucket
-
-# Create an object store bucket and wait for it to be ready (up to 5 minutes)
-replicated cluster addon create object-store 05929b24 --bucket-prefix mybucket --wait 5m
-
-# Perform a dry run to validate inputs without creating the bucket
-replicated cluster addon create object-store 05929b24 --bucket-prefix mybucket --dry-run
-
-# Create an object store bucket and output the result in JSON format
-replicated cluster addon create object-store 05929b24 --bucket-prefix mybucket --output json
-
-# Create an object store bucket with a custom prefix and wait for 10 minutes
-replicated cluster addon create object-store 05929b24 --bucket-prefix custom-prefix --wait 10m
-```
-
-### Options
-
-```
- --bucket-prefix string A prefix for the bucket name to be created (required)
- --dry-run Simulate creation to verify that your inputs are valid without actually creating an add-on
- -h, --help help for object-store
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
- --wait duration Wait duration for add-on to be ready before exiting (leave empty to not wait)
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster addon create](replicated-cli-cluster-addon-create) - Create cluster add-ons.
-
diff --git a/docs/reference/replicated-cli-cluster-addon-create.mdx b/docs/reference/replicated-cli-cluster-addon-create.mdx
deleted file mode 100644
index b53724514d..0000000000
--- a/docs/reference/replicated-cli-cluster-addon-create.mdx
+++ /dev/null
@@ -1,36 +0,0 @@
-# replicated cluster addon create
-
-Create cluster add-ons.
-
-### Synopsis
-
-Create new add-ons for a cluster. This command allows you to add functionality or services to a cluster by provisioning the required add-ons.
-
-### Examples
-
-```
-# Create an object store bucket add-on for a cluster
-replicated cluster addon create object-store CLUSTER_ID --bucket-prefix mybucket
-
-# Perform a dry run for creating an object store add-on
-replicated cluster addon create object-store CLUSTER_ID --bucket-prefix mybucket --dry-run
-```
-
-### Options
-
-```
- -h, --help help for create
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster addon](replicated-cli-cluster-addon) - Manage cluster add-ons.
-* [replicated cluster addon create object-store](replicated-cli-cluster-addon-create-object-store) - Create an object store bucket for a cluster.
-
diff --git a/docs/reference/replicated-cli-cluster-addon-ls.mdx b/docs/reference/replicated-cli-cluster-addon-ls.mdx
deleted file mode 100644
index c783aa968a..0000000000
--- a/docs/reference/replicated-cli-cluster-addon-ls.mdx
+++ /dev/null
@@ -1,51 +0,0 @@
-# replicated cluster addon ls
-
-List cluster add-ons for a cluster.
-
-### Synopsis
-
-The 'cluster addon ls' command allows you to list all add-ons for a specific cluster. This command provides a detailed overview of the add-ons currently installed on the cluster, including their status and any relevant configuration details.
-
-This can be useful for monitoring the health and configuration of add-ons or performing troubleshooting tasks.
-
-```
-replicated cluster addon ls CLUSTER_ID [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Examples
-
-```
-# List add-ons for a cluster with default table output
-replicated cluster addon ls CLUSTER_ID
-
-# List add-ons for a cluster with JSON output
-replicated cluster addon ls CLUSTER_ID --output json
-
-# List add-ons for a cluster with wide table output
-replicated cluster addon ls CLUSTER_ID --output wide
-```
-
-### Options
-
-```
- -h, --help help for ls
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster addon](replicated-cli-cluster-addon) - Manage cluster add-ons.
-
diff --git a/docs/reference/replicated-cli-cluster-addon-rm.mdx b/docs/reference/replicated-cli-cluster-addon-rm.mdx
deleted file mode 100644
index aa70118cb5..0000000000
--- a/docs/reference/replicated-cli-cluster-addon-rm.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
-# replicated cluster addon rm
-
-Remove cluster add-on by ID.
-
-### Synopsis
-
-The 'cluster addon rm' command allows you to remove a specific add-on from a cluster by specifying the cluster ID and the add-on ID.
-
-This command is useful when you want to deprovision an add-on that is no longer needed or when troubleshooting issues related to specific add-ons. The add-on will be removed immediately, and you will receive confirmation upon successful removal.
-
-```
-replicated cluster addon rm CLUSTER_ID --id ADDON_ID [flags]
-```
-
-### Aliases
-
-```
-rm, delete
-```
-
-### Examples
-
-```
-# Remove an add-on with ID 'abc123' from cluster 'cluster456'
-replicated cluster addon rm cluster456 --id abc123
-```
-
-### Options
-
-```
- -h, --help help for rm
- --id string The ID of the cluster add-on to remove (required)
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster addon](replicated-cli-cluster-addon) - Manage cluster add-ons.
-
diff --git a/docs/reference/replicated-cli-cluster-addon.mdx b/docs/reference/replicated-cli-cluster-addon.mdx
deleted file mode 100644
index 23f033232d..0000000000
--- a/docs/reference/replicated-cli-cluster-addon.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
-# replicated cluster addon
-
-Manage cluster add-ons.
-
-### Synopsis
-
-The 'cluster addon' command allows you to manage add-ons installed on a test cluster. Add-ons are additional components or services that can be installed and configured to enhance or extend the functionality of the cluster.
-
-You can use various subcommands to create, list, remove, or check the status of add-ons on a cluster. This command is useful for adding databases, object storage, monitoring, security, or other specialized tools to your cluster environment.
-
-### Examples
-
-```
-# List all add-ons installed on a cluster
-replicated cluster addon ls CLUSTER_ID
-
-# Remove an add-on from a cluster
-replicated cluster addon rm CLUSTER_ID --id ADDON_ID
-
-# Create an object store bucket add-on for a cluster
-replicated cluster addon create object-store CLUSTER_ID --bucket-prefix mybucket
-
-# List add-ons with JSON output
-replicated cluster addon ls CLUSTER_ID --output json
-```
-
-### Options
-
-```
- -h, --help help for addon
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-* [replicated cluster addon create](replicated-cli-cluster-addon-create) - Create cluster add-ons.
-* [replicated cluster addon ls](replicated-cli-cluster-addon-ls) - List cluster add-ons for a cluster.
-* [replicated cluster addon rm](replicated-cli-cluster-addon-rm) - Remove cluster add-on by ID.
-
diff --git a/docs/reference/replicated-cli-cluster-create.mdx b/docs/reference/replicated-cli-cluster-create.mdx
deleted file mode 100644
index b6efdfd2c6..0000000000
--- a/docs/reference/replicated-cli-cluster-create.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
-# replicated cluster create
-
-Create test clusters.
-
-### Synopsis
-
-The 'cluster create' command provisions a new test cluster with the specified Kubernetes distribution and configuration. You can customize the cluster's size, version, node groups, disk space, IP family, and other parameters.
-
-This command supports creating clusters on multiple Kubernetes distributions, including setting up node groups with different instance types and counts. You can also specify a TTL (Time-To-Live) to automatically terminate the cluster after a set duration.
-
-Use the '--dry-run' flag to simulate the creation process and get an estimated cost without actually provisioning the cluster.
-
-```
-replicated cluster create [flags]
-```
-
-### Examples
-
-```
-# Create a new cluster with basic configuration
-replicated cluster create --distribution eks --version 1.21 --nodes 3 --instance-type t3.large --disk 100 --ttl 24h
-
-# Create a cluster with a custom node group
-replicated cluster create --distribution eks --version 1.21 --nodegroup name=workers,instance-type=t3.large,nodes=5 --ttl 24h
-
-# Simulate cluster creation (dry-run)
-replicated cluster create --distribution eks --version 1.21 --nodes 3 --disk 100 --ttl 24h --dry-run
-
-# Create a cluster with autoscaling configuration
-replicated cluster create --distribution eks --version 1.21 --min-nodes 2 --max-nodes 5 --instance-type t3.large --ttl 24h
-
-# Create a cluster with multiple node groups
-replicated cluster create --distribution eks --version 1.21 \
---nodegroup name=workers,instance-type=t3.large,nodes=3 \
---nodegroup name=cpu-intensive,instance-type=c5.2xlarge,nodes=2 \
---ttl 24h
-
-# Create a cluster with custom tags
-replicated cluster create --distribution eks --version 1.21 --nodes 3 --tag env=test --tag project=demo --ttl 24h
-
-# Create a cluster with addons
-replicated cluster create --distribution eks --version 1.21 --nodes 3 --addon object-store --ttl 24h
-```
-
-### Options
-
-```
- --addon stringArray Addons to install on the cluster (can be specified multiple times)
- --bucket-prefix string A prefix for the bucket name to be created (required by '--addon object-store')
- --disk int Disk Size (GiB) to request per node (default 50)
- --distribution string Kubernetes distribution of the cluster to provision
- --dry-run Dry run
- -h, --help help for create
- --instance-type string The type of instance to use (e.g. m6i.large)
- --ip-family string IP Family to use for the cluster (ipv4|ipv6|dual).
- --license-id string License ID to use for the installation (required for Embedded Cluster distribution)
- --max-nodes string Maximum Node count (non-negative number) (only for EKS, AKS and GKE clusters).
- --min-nodes string Minimum Node count (non-negative number) (only for EKS, AKS and GKE clusters).
- --name string Cluster name (defaults to random name)
- --nodegroup stringArray Node group to create (name=?,instance-type=?,nodes=?,min-nodes=?,max-nodes=?,disk=? format, can be specified multiple times). For each nodegroup, at least one flag must be specified. The flags min-nodes and max-nodes are mutually dependent.
- --nodes int Node count (default 1)
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
- --tag stringArray Tag to apply to the cluster (key=value format, can be specified multiple times)
- --ttl string Cluster TTL (duration, max 48h)
- --version string Kubernetes version to provision (format is distribution dependent)
- --wait duration Wait duration for cluster to be ready (leave empty to not wait)
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-kubeconfig.mdx b/docs/reference/replicated-cli-cluster-kubeconfig.mdx
deleted file mode 100644
index 45449fba2e..0000000000
--- a/docs/reference/replicated-cli-cluster-kubeconfig.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
-# replicated cluster kubeconfig
-
-Download credentials for a test cluster.
-
-### Synopsis
-
-The 'cluster kubeconfig' command downloads the credentials (kubeconfig) required to access a test cluster. You can either merge these credentials into your existing kubeconfig file or save them as a new file.
-
-This command ensures that the kubeconfig is correctly configured for use with your Kubernetes tools. You can specify the cluster by ID or by name. Additionally, the kubeconfig can be written to a specific file path or printed to stdout.
-
-You can also use this command to automatically update your current Kubernetes context with the downloaded credentials.
-
-```
-replicated cluster kubeconfig [ID] [flags]
-```
-
-### Examples
-
-```
-# Download and merge kubeconfig into your existing configuration
-replicated cluster kubeconfig CLUSTER_ID
-
-# Save the kubeconfig to a specific file
-replicated cluster kubeconfig CLUSTER_ID --output-path ./kubeconfig
-
-# Print the kubeconfig to stdout
-replicated cluster kubeconfig CLUSTER_ID --stdout
-
-# Download kubeconfig for a cluster by name
-replicated cluster kubeconfig --name "My Cluster"
-
-# Download kubeconfig for a cluster by ID
-replicated cluster kubeconfig --id CLUSTER_ID
-```
-
-### Options
-
-```
- -h, --help help for kubeconfig
- --id string id of the cluster to download credentials for (when name is not provided)
- --name string name of the cluster to download credentials for (when id is not provided)
- --output-path string path to kubeconfig file to write to, if not provided, it will be merged into your existing kubeconfig
- --stdout write kubeconfig to stdout
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-ls.mdx b/docs/reference/replicated-cli-cluster-ls.mdx
deleted file mode 100644
index fcf66d8886..0000000000
--- a/docs/reference/replicated-cli-cluster-ls.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
-# replicated cluster ls
-
-List test clusters.
-
-### Synopsis
-
-The 'cluster ls' command lists all test clusters. This command provides information about the clusters, such as their status, name, distribution, version, and creation time. The output can be formatted in different ways, depending on your needs.
-
-You can filter the list of clusters by time range and status (e.g., show only terminated clusters). You can also watch clusters in real-time, which updates the list every few seconds.
-
-Clusters that have been deleted will be shown with a 'deleted' status.
-
-```
-replicated cluster ls [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Examples
-
-```
-# List all clusters with default table output
-replicated cluster ls
-
-# Show clusters created after a specific date
-replicated cluster ls --start-time 2023-01-01T00:00:00Z
-
-# Watch for real-time updates
-replicated cluster ls --watch
-
-# List clusters with JSON output
-replicated cluster ls --output json
-
-# List only terminated clusters
-replicated cluster ls --show-terminated
-
-# List clusters with wide table output
-replicated cluster ls --output wide
-```
-
-### Options
-
-```
- --end-time string end time for the query (Format: 2006-01-02T15:04:05Z)
- -h, --help help for ls
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
- --show-terminated when set, only show terminated clusters
- --start-time string start time for the query (Format: 2006-01-02T15:04:05Z)
- -w, --watch watch clusters
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-nodegroup-ls.mdx b/docs/reference/replicated-cli-cluster-nodegroup-ls.mdx
deleted file mode 100644
index 48be3f0161..0000000000
--- a/docs/reference/replicated-cli-cluster-nodegroup-ls.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
-# replicated cluster nodegroup ls
-
-List node groups for a cluster.
-
-### Synopsis
-
-The 'cluster nodegroup ls' command lists all the node groups associated with a given cluster. Each node group defines a specific set of nodes with particular configurations, such as instance types and scaling options.
-
-You can view information about the node groups within the specified cluster, including their ID, name, node count, and other configuration details.
-
-You must provide the cluster ID to list its node groups.
-
-```
-replicated cluster nodegroup ls [ID] [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Examples
-
-```
-# List all node groups in a cluster with default table output
-replicated cluster nodegroup ls CLUSTER_ID
-
-# List node groups with JSON output
-replicated cluster nodegroup ls CLUSTER_ID --output json
-
-# List node groups with wide table output
-replicated cluster nodegroup ls CLUSTER_ID --output wide
-```
-
-### Options
-
-```
- -h, --help help for ls
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster nodegroup](replicated-cli-cluster-nodegroup) - Manage node groups for clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-nodegroup.mdx b/docs/reference/replicated-cli-cluster-nodegroup.mdx
deleted file mode 100644
index d8d3da31a3..0000000000
--- a/docs/reference/replicated-cli-cluster-nodegroup.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
-# replicated cluster nodegroup
-
-Manage node groups for clusters.
-
-### Synopsis
-
-The 'cluster nodegroup' command provides functionality to manage node groups within a cluster. This command allows you to list node groups in a Kubernetes or VM-based cluster.
-
-Node groups define a set of nodes with specific configurations, such as instance types, node counts, or scaling rules. You can use subcommands to perform various actions on node groups.
-
-### Examples
-
-```
-# List all node groups for a cluster
-replicated cluster nodegroup ls CLUSTER_ID
-```
-
-### Options
-
-```
- -h, --help help for nodegroup
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-* [replicated cluster nodegroup ls](replicated-cli-cluster-nodegroup-ls) - List node groups for a cluster.
-
diff --git a/docs/reference/replicated-cli-cluster-port-expose.mdx b/docs/reference/replicated-cli-cluster-port-expose.mdx
deleted file mode 100644
index c88c6e9050..0000000000
--- a/docs/reference/replicated-cli-cluster-port-expose.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
-# replicated cluster port expose
-
-Expose a port on a cluster to the public internet.
-
-### Synopsis
-
-The 'cluster port expose' command is used to expose a specified port on a cluster to the public internet. When exposing a port, the command automatically creates a DNS entry and, if using the "https" protocol, provisions a TLS certificate for secure communication.
-
-You can also create a wildcard DNS entry and TLS certificate by specifying the "--wildcard" flag. Please note that creating a wildcard certificate may take additional time.
-
-This command supports different protocols including "http", "https", "ws", and "wss" for web traffic and web socket communication.
-
-NOTE: Currently, this feature only supports VM-based cluster distributions.
-
-```
-replicated cluster port expose CLUSTER_ID --port PORT [flags]
-```
-
-### Examples
-
-```
-# Expose port 8080 with HTTPS protocol and wildcard DNS
-replicated cluster port expose CLUSTER_ID --port 8080 --protocol https --wildcard
-
-# Expose port 3000 with HTTP protocol
-replicated cluster port expose CLUSTER_ID --port 3000 --protocol http
-
-# Expose port 8080 with multiple protocols
-replicated cluster port expose CLUSTER_ID --port 8080 --protocol http,https
-
-# Expose port 8080 and display the result in JSON format
-replicated cluster port expose CLUSTER_ID --port 8080 --protocol https --output json
-```
-
-### Options
-
-```
- -h, --help help for expose
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
- --port int Port to expose (required)
- --protocol strings Protocol to expose (valid values are "http", "https", "ws" and "wss") (default [http,https])
- --wildcard Create a wildcard DNS entry and TLS certificate for this port
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster port](replicated-cli-cluster-port) - Manage cluster ports.
-
diff --git a/docs/reference/replicated-cli-cluster-port-ls.mdx b/docs/reference/replicated-cli-cluster-port-ls.mdx
deleted file mode 100644
index 2216e6df41..0000000000
--- a/docs/reference/replicated-cli-cluster-port-ls.mdx
+++ /dev/null
@@ -1,51 +0,0 @@
-# replicated cluster port ls
-
-List cluster ports for a cluster.
-
-### Synopsis
-
-The 'cluster port ls' command lists all the ports configured for a specific cluster. You must provide the cluster ID to retrieve and display the ports.
-
-This command is useful for viewing the current port configurations, protocols, and other related settings of your test cluster. The output format can be customized to suit your needs, and the available formats include table, JSON, and wide views.
-
-```
-replicated cluster port ls CLUSTER_ID [flags]
-```
-
-### Aliases
-
-```
-ls, list
-```
-
-### Examples
-
-```
-# List ports for a cluster in the default table format
-replicated cluster port ls CLUSTER_ID
-
-# List ports for a cluster in JSON format
-replicated cluster port ls CLUSTER_ID --output json
-
-# List ports for a cluster in wide format
-replicated cluster port ls CLUSTER_ID --output wide
-```
-
-### Options
-
-```
- -h, --help help for ls
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster port](replicated-cli-cluster-port) - Manage cluster ports.
-
diff --git a/docs/reference/replicated-cli-cluster-port-rm.mdx b/docs/reference/replicated-cli-cluster-port-rm.mdx
deleted file mode 100644
index d312cd7039..0000000000
--- a/docs/reference/replicated-cli-cluster-port-rm.mdx
+++ /dev/null
@@ -1,54 +0,0 @@
-# replicated cluster port rm
-
-Remove cluster port by ID.
-
-### Synopsis
-
-The 'cluster port rm' command removes a specific port from a cluster. You must provide either the ID of the port or the port number and protocol(s) to remove.
-
-This command is useful for managing the network settings of your test clusters by allowing you to clean up unused or incorrect ports. After removing a port, the updated list of ports will be displayed.
-
-Note that you can only use either the port ID or port number when removing a port, not both at the same time.
-
-```
-replicated cluster port rm CLUSTER_ID --id PORT_ID [flags]
-```
-
-### Aliases
-
-```
-rm, delete
-```
-
-### Examples
-
-```
-# Remove a port using its ID
-replicated cluster port rm CLUSTER_ID --id PORT_ID
-
-# Remove a port using its number (deprecated)
-replicated cluster port rm CLUSTER_ID --port 8080 --protocol http,https
-
-# Remove a port and display the result in JSON format
-replicated cluster port rm CLUSTER_ID --id PORT_ID --output json
-```
-
-### Options
-
-```
- -h, --help help for rm
- --id string ID of the port to remove (required)
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster port](replicated-cli-cluster-port) - Manage cluster ports.
-
diff --git a/docs/reference/replicated-cli-cluster-port.mdx b/docs/reference/replicated-cli-cluster-port.mdx
deleted file mode 100644
index 90e1946d0c..0000000000
--- a/docs/reference/replicated-cli-cluster-port.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
-# replicated cluster port
-
-Manage cluster ports.
-
-### Synopsis
-
-The 'cluster port' command is a parent command for managing ports in a cluster. It allows users to list, remove, or expose specific ports used by the cluster. Use the subcommands (such as 'ls', 'rm', and 'expose') to manage port configurations effectively.
-
-This command provides flexibility for handling ports in various test clusters, ensuring efficient management of cluster networking settings.
-
-### Examples
-
-```
-# List all exposed ports in a cluster
-replicated cluster port ls [CLUSTER_ID]
-
-# Remove an exposed port from a cluster
-replicated cluster port rm [CLUSTER_ID] [PORT]
-
-# Expose a new port in a cluster
-replicated cluster port expose [CLUSTER_ID] [PORT]
-```
-
-### Options
-
-```
- -h, --help help for port
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-* [replicated cluster port expose](replicated-cli-cluster-port-expose) - Expose a port on a cluster to the public internet.
-* [replicated cluster port ls](replicated-cli-cluster-port-ls) - List cluster ports for a cluster.
-* [replicated cluster port rm](replicated-cli-cluster-port-rm) - Remove cluster port by ID.
-
diff --git a/docs/reference/replicated-cli-cluster-prepare.mdx b/docs/reference/replicated-cli-cluster-prepare.mdx
deleted file mode 100644
index 31ee8b84e5..0000000000
--- a/docs/reference/replicated-cli-cluster-prepare.mdx
+++ /dev/null
@@ -1,69 +0,0 @@
-# replicated cluster prepare
-
-Prepare cluster for testing.
-
-### Synopsis
-
-The 'cluster prepare' command provisions a Kubernetes cluster and installs an application using a Helm chart or KOTS YAML configuration.
-
-This command is designed to be used in CI environments to prepare a cluster for testing by deploying a Helm chart or KOTS application with entitlements and custom values. You can specify the cluster configuration, such as the Kubernetes distribution, version, node count, and instance type, and then install your application automatically.
-
-Alternatively, if you prefer deploying KOTS applications, you can specify YAML manifests for the release and use the '--shared-password' flag for the KOTS admin console.
-
-You can also pass entitlement values to configure the cluster's customer entitlements.
-
-Note:
-- The '--chart' flag cannot be used with '--yaml', '--yaml-file', or '--yaml-dir'.
-- If deploying a Helm chart, use the '--set' flags to pass chart values. When deploying a KOTS application, the '--shared-password' flag is required.
-
-```
-replicated cluster prepare [flags]
-```
-
-### Examples
-
-```
-replicated cluster prepare --distribution eks --version 1.27 --instance-type c6.xlarge --node-count 3 --chart ./your-chart.tgz --values ./values.yaml --set chart-key=value --set chart-key2=value2
-```
-
-### Options
-
-```
- --app-ready-timeout duration Timeout to wait for the application to be ready. Must be in Go duration format (e.g., 10s, 2m). (default 5m0s)
- --chart string Path to the helm chart package to deploy
- --cluster-id string The ID of an existing cluster to use instead of creating a new one.
- --config-values-file string Path to a manifest containing config values (must be apiVersion: kots.io/v1beta1, kind: ConfigValues).
- --disk int Disk Size (GiB) to request per node. (default 50)
- --distribution string Kubernetes distribution of the cluster to provision
- --entitlements strings The entitlements to set on the customer. Can be specified multiple times.
- -h, --help help for prepare
- --instance-type string the type of instance to use clusters (e.g. x5.xlarge)
- --name string Cluster name
- --namespace string The namespace into which to deploy the KOTS application or Helm chart. (default "default")
- --node-count int Node count. (default 1)
- --set stringArray Set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2).
- --set-file stringArray Set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2).
- --set-json stringArray Set JSON values on the command line (can specify multiple or separate values with commas: key1=jsonval1,key2=jsonval2).
- --set-literal stringArray Set a literal STRING value on the command line.
- --set-string stringArray Set STRING values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2).
- --shared-password string Shared password for the KOTS admin console.
- --ttl string Cluster TTL (duration, max 48h)
- --values strings Specify values in a YAML file or a URL (can specify multiple).
- --version string Kubernetes version to provision (format is distribution dependent)
- --wait duration Wait duration for cluster to be ready. (default 5m0s)
- --yaml string The YAML config for this release. Use '-' to read from stdin. Cannot be used with the --yaml-file flag.
- --yaml-dir string The directory containing multiple yamls for a KOTS release. Cannot be used with the --yaml flag.
- --yaml-file string The YAML config for this release. Cannot be used with the --yaml flag.
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-rm.mdx b/docs/reference/replicated-cli-cluster-rm.mdx
deleted file mode 100644
index 603cda82d6..0000000000
--- a/docs/reference/replicated-cli-cluster-rm.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
-# replicated cluster rm
-
-Remove test clusters.
-
-### Synopsis
-
-The 'rm' command removes test clusters immediately.
-
-You can remove clusters by specifying a cluster ID, or by using other criteria such as cluster names or tags. Alternatively, you can remove all clusters in your account at once.
-
-This command can also be used in a dry-run mode to simulate the removal without actually deleting anything.
-
-You cannot mix the use of cluster IDs with other options like removing by name, tag, or removing all clusters at once.
-
-```
-replicated cluster rm ID [ID …] [flags]
-```
-
-### Aliases
-
-```
-rm, delete
-```
-
-### Examples
-
-```
-# Remove a specific cluster by ID
-replicated cluster rm CLUSTER_ID
-
-# Remove all clusters
-replicated cluster rm --all
-```
-
-### Options
-
-```
- --all remove all clusters
- --dry-run Dry run
- -h, --help help for rm
- --name stringArray Name of the cluster to remove (can be specified multiple times)
- --tag stringArray Tag of the cluster to remove (key=value format, can be specified multiple times)
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-shell.mdx b/docs/reference/replicated-cli-cluster-shell.mdx
deleted file mode 100644
index d9cf2db027..0000000000
--- a/docs/reference/replicated-cli-cluster-shell.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
-# replicated cluster shell
-
-Open a new shell with kubeconfig configured.
-
-### Synopsis
-
-The 'shell' command opens a new shell session with the kubeconfig configured for the specified test cluster. This allows you to have immediate kubectl access to the cluster within the shell environment.
-
-You can either specify the cluster ID directly or provide the cluster name to resolve the corresponding cluster ID. The shell will inherit your existing environment and add the necessary kubeconfig context for interacting with the Kubernetes cluster.
-
-Once inside the shell, you can use 'kubectl' to interact with the cluster. To exit the shell, press Ctrl-D or type 'exit'. When the shell closes, the kubeconfig will be reset back to your default configuration.
-
-```
-replicated cluster shell [ID] [flags]
-```
-
-### Examples
-
-```
-# Open a shell for a cluster by ID
-replicated cluster shell CLUSTER_ID
-
-# Open a shell for a cluster by name
-replicated cluster shell --name "My Cluster"
-```
-
-### Options
-
-```
- -h, --help help for shell
- --id string id of the cluster to have kubectl access to (when name is not provided)
- --name string name of the cluster to have kubectl access to.
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster](replicated-cli-cluster) - Manage test Kubernetes clusters.
-
diff --git a/docs/reference/replicated-cli-cluster-update-nodegroup.mdx b/docs/reference/replicated-cli-cluster-update-nodegroup.mdx
deleted file mode 100644
index d6f4f3aca6..0000000000
--- a/docs/reference/replicated-cli-cluster-update-nodegroup.mdx
+++ /dev/null
@@ -1,49 +0,0 @@
-# replicated cluster update nodegroup
-
-Update a nodegroup for a test cluster.
-
-### Synopsis
-
-The 'nodegroup' command allows you to update the configuration of a nodegroup within a test cluster. You can update attributes like the number of nodes, minimum and maximum node counts for autoscaling, and more.
-
-If you do not provide the nodegroup ID, the command will try to resolve it based on the nodegroup name provided.
-
-```
-replicated cluster update nodegroup [ID] [flags]
-```
-
-### Examples
-
-```
-# Update the number of nodes in a nodegroup
-replicated cluster update nodegroup CLUSTER_ID --nodegroup-id NODEGROUP_ID --nodes 3
-
-# Update the autoscaling limits for a nodegroup
-replicated cluster update nodegroup CLUSTER_ID --nodegroup-id NODEGROUP_ID --min-nodes 2 --max-nodes 5
-```
-
-### Options
-
-```
- -h, --help help for nodegroup
- --max-nodes string The maximum number of nodes in the nodegroup
- --min-nodes string The minimum number of nodes in the nodegroup
- --nodegroup-id string The ID of the nodegroup to update
- --nodegroup-name string The name of the nodegroup to update
- --nodes int The number of nodes in the nodegroup
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --id string id of the cluster to update (when name is not provided)
- --name string Name of the cluster to update.
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster update](replicated-cli-cluster-update) - Update cluster settings.
-
diff --git a/docs/reference/replicated-cli-cluster-update-ttl.mdx b/docs/reference/replicated-cli-cluster-update-ttl.mdx
deleted file mode 100644
index 5805166e1d..0000000000
--- a/docs/reference/replicated-cli-cluster-update-ttl.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
-# replicated cluster update ttl
-
-Update TTL for a test cluster.
-
-### Synopsis
-
-The 'ttl' command allows you to update the Time-To-Live (TTL) of a test cluster. The TTL represents the duration for which the cluster will remain active before it is automatically terminated. The duration starts from the moment the cluster becomes active. You must provide a valid duration, with a maximum limit of 48 hours.
-
-```
-replicated cluster update ttl [ID] [flags]
-```
-
-### Examples
-
-```
-# Update the TTL for a specific cluster
-replicated cluster update ttl CLUSTER_ID --ttl 24h
-```
-
-### Options
-
-```
- -h, --help help for ttl
- --output string The output format to use. One of: json|table|wide (default: table) (default "table")
- --ttl string Update TTL which starts from the moment the cluster is running (duration, max 48h).
-```
-
-### Options inherited from parent commands
-
-```
- --app string The app slug or app id to use in all calls
- --id string id of the cluster to update (when name is not provided)
- --name string Name of the cluster to update.
- --token string The API token to use to access your app in the Vendor API
-```
-
-### SEE ALSO
-
-* [replicated cluster update](replicated-cli-cluster-update) - Update cluster settings.
-
diff --git a/docs/reference/replicated-cli-cluster-update.mdx b/docs/reference/replicated-cli-cluster-update.mdx
deleted file mode 100644
index fabe515263..0000000000
--- a/docs/reference/replicated-cli-cluster-update.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
-# replicated cluster update
-
-Update cluster settings.
-
-### Synopsis
-
-The 'update' command allows you to update various settings of a test cluster, such as its name or ID.
-
-You can either specify the cluster ID directly or provide the cluster name, and the command will resolve the corresponding cluster ID. This allows you to modify the cluster's configuration based on the unique identifier or the name of the cluster.
-
-### Examples
-
-```
-# Update a cluster using its ID
-replicated cluster update --id
-
- [View a larger version of this image](/images/customer-expiration-policy.png)
-
-1. Install the Replicated SDK as a standalone component in your cluster. This is called _integration mode_. Installing in integration mode allows you to develop locally against the SDK API without needing to create releases for your application in the vendor portal. See [Developing Against the SDK API](/vendor/replicated-sdk-development).
-
-1. In your application, use the `/api/v1/license/fields/expires_at` endpoint to get the `expires_at` field that you defined in the previous step.
-
- **Example:**
-
- ```bash
- curl replicated:3000/api/v1/license/fields/expires_at
- ```
-
- ```json
- {
- "name": "expires_at",
- "title": "Expiration",
- "description": "License Expiration",
- "value": "2023-05-30T00:00:00Z",
- "valueType": "String",
- "signature": {
- "v1": "c6rsImpilJhW0eK+Kk37jeRQvBpvWgJeXK2M..."
- }
- }
- ```
-
-1. Add logic to your application to revoke access if the current date and time is more recent than the expiration date of the license.
-
-1. (Recommended) Use signature verification in your application to ensure the integrity of the license field. See [Verifying License Field Signatures with the Replicated SDK API](/vendor/licenses-verify-fields-sdk-api).
diff --git a/docs/reference/replicated-sdk-customizing.md b/docs/reference/replicated-sdk-customizing.md
new file mode 100644
index 0000000000..05533c8fa5
--- /dev/null
+++ b/docs/reference/replicated-sdk-customizing.md
@@ -0,0 +1,210 @@
+# Customizing the Replicated SDK
+
+This topic describes various ways to customize the Replicated SDK, including customizing RBAC, setting environment variables, and adding tolerations.
+
+## Customize RBAC for the SDK
+
+This section describes role-based access control (RBAC) for the Replicated SDK, including the default RBAC, minimum RBAC requirements, and how to install the SDK with custom RBAC.
+
+### Default RBAC
+
+The SDK creates default Role, RoleBinding, and ServiceAccount objects during installation. The default Role allows the SDK to get, list, and watch all resources in the namespace, to create Secrets, and to update the `replicated` and `replicated-instance-report` Secrets:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ labels:
+ {{- include "replicated.labels" . | nindent 4 }}
+ name: replicated-role
+rules:
+- apiGroups:
+ - '*'
+ resources:
+ - '*'
+ verbs:
+ - 'get'
+ - 'list'
+ - 'watch'
+- apiGroups:
+ - ''
+ resources:
+ - 'secrets'
+ verbs:
+ - 'create'
+- apiGroups:
+ - ''
+ resources:
+ - 'secrets'
+ verbs:
+ - 'update'
+ resourceNames:
+ - replicated
+ - replicated-instance-report
+ - replicated-custom-app-metrics-report
+```
+
+### Minimum RBAC Requirements
+
+The SDK requires the following minimum RBAC permissions:
+* Create Secrets.
+* Get and update Secrets named `replicated`, `replicated-instance-report`, and `replicated-custom-app-metrics-report`.
+* For status informers, the SDK requires the following permissions:
+ * If you defined custom status informers, then the SDK must have permissions to get, list, and watch all the resources listed in the `replicated.statusInformers` array in your Helm chart `values.yaml` file.
+ * If you did _not_ define custom status informers, then the SDK must have permissions to get, list, and watch the following resources:
+ * Deployments
+ * Daemonsets
+ * Ingresses
+ * PersistentVolumeClaims
+ * Statefulsets
+ * Services
+ * For any Ingress resources used as status informers, the SDK requires `get` permissions for the Service resources listed in the `backend.Service.Name` field of the Ingress resource.
+ * For any Daemonset and Statefulset resources used as status informers, the SDK requires `list` permissions for pods in the namespace.
+ * For any Service resources used as status informers, the SDK requires `get` permissions for Endpoint resources with the same name as the service.
+
+ The Replicated Vendor Portal uses status informers to provide application status data. For more information, see [Helm Installations](/vendor/insights-app-status#helm-installations) in _Enabling and Understanding Application Status_.
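+
+The following is a minimal sketch of a custom Role that satisfies these requirements when no custom status informers are defined (the Role name is illustrative):
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: replicated-minimal-role
+rules:
+# Default status informer resources
+- apiGroups: ['apps']
+  resources: ['deployments', 'daemonsets', 'statefulsets']
+  verbs: ['get', 'list', 'watch']
+- apiGroups: ['networking.k8s.io']
+  resources: ['ingresses']
+  verbs: ['get', 'list', 'watch']
+- apiGroups: ['']
+  resources: ['services', 'persistentvolumeclaims']
+  verbs: ['get', 'list', 'watch']
+# Pods for Daemonset and Statefulset informers, Endpoints for Service informers
+- apiGroups: ['']
+  resources: ['pods', 'endpoints']
+  verbs: ['get', 'list', 'watch']
+# Secrets the SDK creates and maintains
+- apiGroups: ['']
+  resources: ['secrets']
+  verbs: ['create']
+- apiGroups: ['']
+  resources: ['secrets']
+  verbs: ['get', 'update']
+  resourceNames:
+  - replicated
+  - replicated-instance-report
+  - replicated-custom-app-metrics-report
+```
+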
+### Install the SDK with Custom RBAC
+
+#### Custom ServiceAccount
+
+To use the SDK with custom RBAC permissions, provide the name for a custom ServiceAccount object during installation. When a service account is provided, the SDK uses the RBAC permissions granted to the service account and does not create the default Role, RoleBinding, or ServiceAccount objects.
+
+To install the SDK with custom RBAC:
+
+1. Create custom Role, RoleBinding, and ServiceAccount objects. The Role must meet the minimum requirements described in [Minimum RBAC Requirements](#minimum-rbac-requirements) above. For an illustrative ServiceAccount and RoleBinding, see the sketch at the end of this procedure.
+1. During installation, provide the name of the service account that you created by including `--set replicated.serviceAccountName=CUSTOM_SERVICEACCOUNT_NAME`.
+
+ **Example**:
+
+ ```
+ helm install wordpress oci://registry.replicated.com/my-app/beta/wordpress --set replicated.serviceAccountName=mycustomserviceaccount
+ ```
+
+ For more information about installing with Helm, see [Installing with Helm](/vendor/install-with-helm).
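+
+   For reference, the custom ServiceAccount and RoleBinding might look like the following sketch (object names are illustrative, and the Role is the minimal one sketched in the previous section):
+
+   ```yaml
+   apiVersion: v1
+   kind: ServiceAccount
+   metadata:
+     name: mycustomserviceaccount
+   ---
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: RoleBinding
+   metadata:
+     name: mycustomserviceaccount-binding
+   roleRef:
+     apiGroup: rbac.authorization.k8s.io
+     kind: Role
+     name: replicated-minimal-role
+   subjects:
+   - kind: ServiceAccount
+     name: mycustomserviceaccount
+   ```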
+
+#### Custom ClusterRole
+
+To use the SDK with an existing ClusterRole, provide the name for a custom ClusterRole object during installation. When a cluster role is provided, the SDK uses the RBAC permissions granted to the cluster role and does not create the default RoleBinding. Instead, the SDK creates a ClusterRoleBinding as well as a ServiceAccount object.
+
+To install the SDK with a custom ClusterRole:
+
+1. Create a custom ClusterRole object. The ClusterRole must meet at least the minimum requirements described in [Minimum RBAC Requirements](#minimum-rbac-requirements) above. However, it can also provide additional permissions that can be used by the SDK, such as listing cluster Nodes.
+1. During installation, provide the name of the cluster role that you created by including `--set replicated.clusterRole=CUSTOM_CLUSTERROLE_NAME`.
+
+ **Example**:
+
+ ```
+ helm install wordpress oci://registry.replicated.com/my-app/beta/wordpress --set replicated.clusterRole=mycustomclusterrole
+ ```
+
+ For more information about installing with Helm, see [Installing with Helm](/vendor/install-with-helm).
+
+## Set Environment Variables {#env-var}
+
+The Replicated SDK provides a `replicated.extraEnv` value that allows users to set additional environment variables for the deployment that are not exposed as Helm values.
+
+This ensures that users can set the environment variables that they require without the SDK Helm chart needing to be modified to expose the values. For example, if the SDK is running behind an HTTP proxy server, then the user could set `HTTP_PROXY` or `HTTPS_PROXY` environment variables to provide the hostname or IP address of their proxy server.
+
+To add environment variables to the Replicated SDK deployment, include the `replicated.extraEnv` array in your Helm chart `values.yaml` file. The `replicated.extraEnv` array accepts a list of environment variables in the following format:
+
+```yaml
+# Helm chart values.yaml
+
+replicated:
+ extraEnv:
+ - name: ENV_VAR_NAME
+ value: ENV_VAR_VALUE
+```
+
+:::note
+If the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` variables are configured with the [kots install](/reference/kots-cli-install) command, these variables will also be set automatically in the Replicated SDK.
+:::
+
+**Example**:
+
+```yaml
+# Helm chart values.yaml
+
+replicated:
+ extraEnv:
+ - name: MY_ENV_VAR
+ value: my-value
+ - name: MY_ENV_VAR_2
+ value: my-value-2
+```
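+
+For instance, to route the SDK's outbound requests through a proxy server (the proxy address is illustrative):
+
+```yaml
+# Helm chart values.yaml
+
+replicated:
+  extraEnv:
+    - name: HTTP_PROXY
+      value: http://proxy.internal:3128
+    - name: HTTPS_PROXY
+      value: http://proxy.internal:3128
+```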
+
+## Custom Certificate Authority
+
+When installing the Replicated SDK behind a proxy server that terminates TLS and injects a custom certificate, you must provide the CA to the SDK. This can be done by storing the CA in a ConfigMap or a Secret prior to installation and providing appropriate values during installation.
+
+### Using a ConfigMap
+
+To use a CA stored in a ConfigMap:
+
+1. Create a ConfigMap with the CA as the data value. Note that the name of the ConfigMap and the data key can be anything. For example (the ConfigMap name and file path are illustrative):
+    ```bash
+    kubectl -n <namespace> create configmap private-ca --from-file=ca.crt=<path-to-ca-file>
+    ```
+
+By default, no volumes are included in the backup. If any pods mount a volume that should be backed up, you must configure the backup with an annotation listing the specific volumes to include in the backup.
+| podAnnotation | Description |
+|---|---|
+| `backup.velero.io/backup-volumes` | A comma-separated list of volumes from the Pod to include in the backup. The primary data volume is not included in this field because data is exported using the backup hook. |
+| `pre.hook.backup.velero.io/command` | A stringified JSON array containing the command for the backup hook. This command is a `pg_dump` from the running database to the backup volume. |
+| `pre.hook.backup.velero.io/timeout` | A duration for the maximum time to let this script run. |
+| `post.hook.restore.velero.io/command` | A Velero exec restore hook that runs a script to check if the database file exists, and restores only if it exists. Then, the script deletes the file after the operation is complete. |
+
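+A sketch of how these annotations might look on a Pod running a PostgreSQL database (the volume, paths, credentials, and commands are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: postgres
+  annotations:
+    # Back up only the dedicated backup volume, not the primary data volume
+    backup.velero.io/backup-volumes: backup
+    # Export the database to the backup volume before the backup is taken
+    pre.hook.backup.velero.io/command: '["/bin/bash", "-c", "PGPASSWORD=$POSTGRES_PASSWORD pg_dump -U postgres -h 127.0.0.1 > /backup/pg_dump.sql"]'
+    pre.hook.backup.velero.io/timeout: 3m
+    # After restore, load the dump only if it exists, then delete it
+    post.hook.restore.velero.io/command: '["/bin/bash", "-c", "if [ -f /backup/pg_dump.sql ]; then PGPASSWORD=$POSTGRES_PASSWORD psql -U postgres -h 127.0.0.1 < /backup/pg_dump.sql && rm /backup/pg_dump.sql; fi"]'
+spec:
+  containers:
+    - name: postgres
+      image: postgres:14
+      volumeMounts:
+        - name: backup
+          mountPath: /backup
+  volumes:
+    - name: backup
+      emptyDir: {}
+```
+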
+After clicking this button, the bundle will be immediately available under the Troubleshoot tab in the Vendor Portal team account associated with this customer.
+
+For more information on how your customer can use this feature, see [Generating Support Bundles from the Admin Console](/enterprise/troubleshooting-an-app).
+
+### How to Enable Direct Bundle Uploads
+
+Direct bundle uploads are disabled by default. To enable this feature for your customer:
+
+1. Log in to the Vendor Portal and navigate to your customer's **Manage Customer** page.
+1. Under the **License options** section, make sure your customer has **KOTS Install Enabled** checked, and then check the **Support Bundle Upload Enabled (Beta)** option.
+
+
+ [View a larger version of this image](/images/configure-direct-support-bundle-upload.png)
+1. Click **Save**.
+
+### Limitations
+
+- You will not receive a notification when a customer sends a support bundle to the Vendor Portal. To avoid overlooking these uploads, activate this feature only if there is a reliable escalation process already in place for the customer license.
+- This feature supports only online KOTS installations. If the feature is enabled but the application is installed in air gap mode, the upload button does not appear.
+- There is a 500 MB limit for support bundles uploaded directly from the Admin Console.
diff --git a/docs/reference/support-host-support-bundles.md b/docs/reference/support-host-support-bundles.md
new file mode 100644
index 0000000000..855e53b4dc
--- /dev/null
+++ b/docs/reference/support-host-support-bundles.md
@@ -0,0 +1,75 @@
+import GenerateBundleHost from "../partials/support-bundles/_generate-bundle-host.mdx"
+
+# Generating Host Bundles for kURL
+
+This topic describes how to configure a host support bundle spec for Replicated kURL installations. For information about generating host support bundles for Replicated Embedded Cluster installations, see [Generating Host Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
+
+## Overview
+
+Host support bundles can be used to collect information directly from the host where a kURL cluster is running, such as CPU, memory, available block devices, and the operating system. Host support bundles can also be used for testing network connectivity and gathering the output of provided commands.
+
+Host bundles for kURL are useful when:
+- The kURL cluster is offline
+- The kURL installer failed before the control plane was initialized
+- The Admin Console is not working
+- You want to debug host-specific performance and configuration problems even when the cluster is running
+
+You can create a YAML spec to allow users to generate host support bundles for kURL installations. For information, see [Create a Host Support Bundle Spec](#create-a-host-support-bundle-spec) below.
+
+Replicated also provides a default support bundle spec to collect host-level information for installations with the Embedded Cluster installer. For more information, see [Generating Host Bundles for Embedded Cluster](/vendor/support-bundle-embedded).
+
+## Create a Host Support Bundle Spec
+
+To allow users to generate host support bundles for kURL installations, create a host support bundle spec in a YAML manifest that is separate from your application release and then share the file with customers to run on their hosts. This spec is separate from your application release because host collectors and analyzers are intended to run directly on the host and not with Replicated KOTS. If KOTS runs host collectors, the collectors are unlikely to produce the desired results because they run in the context of the kotsadm Pod.
+
+To configure a host support bundle spec for kURL:
+
+1. Create a SupportBundle custom resource manifest file (`kind: SupportBundle`).
+
+1. Configure all of your host collectors and analyzers in one manifest file. You can use the following resources to help create your specification:
+
+   - Access sample specifications in the Replicated troubleshoot-specs repository, which provides specifications for supporting your customers. See [troubleshoot-specs/host](https://github.com/replicatedhq/troubleshoot-specs/tree/main/host) in GitHub.
+
+ - View a list and details of the available host collectors and analyzers. See [All Host Collectors and Analyzers](https://troubleshoot.sh/docs/host-collect-analyze/all/) in the Troubleshoot documentation.
+
+ **Example:**
+
+ The following example shows host collectors and analyzers for the number of CPUs and the amount of memory.
+
+ ```yaml
+ apiVersion: troubleshoot.sh/v1beta2
+ kind: SupportBundle
+ metadata:
+ name: host-collectors
+ spec:
+ hostCollectors:
+ - cpu: {}
+ - memory: {}
+ hostAnalyzers:
+ - cpu:
+ checkName: "Number of CPUs"
+ outcomes:
+ - fail:
+ when: "count < 2"
+ message: At least 2 CPU cores are required, and 4 CPU cores are recommended.
+ - pass:
+ message: This server has at least 4 CPU cores.
+ - memory:
+ checkName: "Amount of Memory"
+ outcomes:
+ - fail:
+ when: "< 4G"
+ message: At least 4G of memory is required, and 8G is recommended.
+ - pass:
+ message: The system has at least 8G of memory.
+ ```
+
+1. Share the file with your customers to run on their hosts.
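+
+   For example, a customer might download the support-bundle binary and run the spec directly on the host (the release URL and spec filename are illustrative; run as root so that host collectors can read system information):
+
+   ```bash
+   # Download and extract the support-bundle CLI (use the release for your platform)
+   curl -LO https://github.com/replicatedhq/troubleshoot/releases/latest/download/support-bundle_linux_amd64.tar.gz
+   tar -xzf support-bundle_linux_amd64.tar.gz
+
+   # Run the host support bundle spec on the host itself
+   sudo ./support-bundle --interactive=false host-collectors.yaml
+   ```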
+
+:::important
+Do not store support bundles on public shares, as they may still contain information that could be used to infer private data about the installation, even if some values are redacted.
+:::
+
+## Generate a Host Bundle for kURL
+
+<GenerateBundleHost/>
+
+ [View a larger version of this image](/images/support-bundle-analyze.png)
+
+1. (Optional) If the support bundle relates to an open support issue, select the support issue from the dropdown to share the bundle with Replicated.
+
+1. Click **Upload support bundle**.
+
+ The **Support bundle analysis** page opens. The **Support bundle analysis** page includes information about the bundle, any available instance reporting data from the point in time when the bundle was collected, an analysis overview that can be filtered to show errors and warnings, and a file inspector.
+
+ 
+
+ [View a larger version of this image](/images/support-bundle-analysis-overview.png)
+
+1. On the **File inspector** tab, select any files from the directory tree to inspect the details of any files included in the support bundle, such as log files.
+
+1. (Optional) Click **Download bundle** to download the bundle. This can be helpful if you want to access the bundle from another system or if other team members want to access the bundle and use other tools to examine the files.
+
+1. (Optional) Navigate back to the [**Troubleshoot**](https://vendor.replicated.com/troubleshoot) page and click **Create cluster** to provision a cluster with Replicated Compatibility Matrix. This can be helpful for creating customer-representative environments for troubleshooting. For more information about creating clusters with Compatibility Matrix, see [Using Compatibility Matrix](testing-how-to).
+
+
+
+ [View a larger version of this image](/images/cmx-cluster-configuration.png)
+
+1. If you cannot resolve your customer's issue and need to submit a support request, go to the [**Support**](https://vendor.replicated.com/) page and click **Open a support request**. For more information, see [Submitting a Support Request](support-submit-request).
+
+ :::note
+ The **Share with Replicated** button on the support bundle analysis page does _not_ open a support request. You might be directed to use the **Share with Replicated** option when you are already interacting with a Replicated team member.
+ :::
+
+ 
+
+ [View a larger version of this image](/images/support.png)
diff --git a/docs/reference/support-modular-support-bundle-specs.md b/docs/reference/support-modular-support-bundle-specs.md
new file mode 100644
index 0000000000..e375de64dc
--- /dev/null
+++ b/docs/reference/support-modular-support-bundle-specs.md
@@ -0,0 +1,75 @@
+# About Creating Modular Support Bundle Specs
+
+This topic describes how to use a modular approach to creating support bundle specs.
+
+## Overview
+
+Support bundle specifications can be designed using a modular approach. This refers to creating multiple different specs that are scoped to individual components or microservices, rather than creating a single, large spec. For example, for applications that are deployed as multiple Helm charts, vendors can create a separate support bundle spec in the `templates` directory in the parent chart as well as in each subchart.
+
+This modular approach helps teams develop specs that are easier to maintain, and helps teams avoid the merge conflicts that are more likely to occur when making changes to a large spec. When generating support bundles for an application that includes multiple modular specs, the specs are merged so that only one support bundle archive is generated.
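+
+For example, a spec shipped in a subchart's `templates` directory is typically wrapped in a Secret so that it can be discovered in the cluster. A minimal sketch, assuming the Troubleshoot label and key conventions (`troubleshoot.sh/kind: support-bundle` and `support-bundle-spec`) with illustrative names:
+
+```yaml
+# templates/troubleshoot-secret.yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: nginx-support-bundle
+  labels:
+    troubleshoot.sh/kind: support-bundle
+stringData:
+  support-bundle-spec: |
+    apiVersion: troubleshoot.sh/v1beta2
+    kind: SupportBundle
+    metadata:
+      name: nginx
+    spec:
+      collectors:
+        - logs:
+            selector:
+              - app=nginx
+```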
+
+## Example: Support Bundle Specifications by Component {#component}
+
+Using a modular approach for an application that ships MySQL, NGINX, and Redis, your team can add collectors and analyzers using a separate support bundle specification for each component.
+
+`manifests/nginx/troubleshoot.yaml`
+
+This collector and analyzer checks compliance for the minimum number of replicas for the NGINX component:
+
+```yaml
+apiVersion: troubleshoot.sh/v1beta2
+kind: SupportBundle
+metadata:
+ name: nginx
+spec:
+ collectors:
+ - logs:
+ selector:
+ - app=nginx
+ analyzers:
+ - deploymentStatus:
+ name: nginx
+ outcomes:
+ - fail:
+ when: replicas < 2
+```
+
+`manifests/mysql/troubleshoot.yaml`
+
+This collector and analyzer checks compliance for the minimum version of the MySQL component:
+
+```yaml
+apiVersion: troubleshoot.sh/v1beta2
+kind: SupportBundle
+metadata:
+ name: mysql
+spec:
+ collectors:
+ - mysql:
+ uri: 'dbuser:**REDACTED**@tcp(db-host)/db'
+ analyzers:
+ - mysql:
+ checkName: Must be version 8.x or later
+ outcomes:
+ - fail:
+ when: version < 8.x
+```
+
+`manifests/redis/troubleshoot.yaml`
+
+This collector and analyzer checks that the Redis server is responding:
+
+```yaml
+apiVersion: troubleshoot.sh/v1beta2
+kind: SupportBundle
+metadata:
+ name: redis
+spec:
+ collectors:
+ - redis:
+ collectorName: redis
+ uri: rediss://default:password@hostname:6379
+```
+
+A single support bundle archive can be generated from a combination of these manifests using the `kubectl support-bundle --load-cluster-specs` command.
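+
+For example, assuming the specs above are deployed to the cluster in Secrets or ConfigMaps labeled `troubleshoot.sh/kind: support-bundle`, a single command discovers and merges them:
+
+```bash
+# Discovers all labeled specs in the cluster and produces one merged bundle
+kubectl support-bundle --load-cluster-specs
+```
+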
+For more information and additional options, see [Generating Support Bundles](support-bundle-generating).
\ No newline at end of file
diff --git a/docs/reference/support-online-support-bundle-specs.md b/docs/reference/support-online-support-bundle-specs.md
new file mode 100644
index 0000000000..bc446c58c7
--- /dev/null
+++ b/docs/reference/support-online-support-bundle-specs.md
@@ -0,0 +1,67 @@
+# Making Support Bundle Specs Available Online
+
+This topic describes how to make your application's support bundle specs available online as well as how to link to online specs.
+
+## Overview
+
+You can make the definition of one or more support bundle specs available online in a source repository and link to it from the specs in the cluster. This approach lets you update collectors and analyzers outside of the application release and notify customers of potential problems and fixes in between application updates.
+
+The schema supports a `uri:` field that, when set, causes the support bundle generation to use the online specification. If the URI is unreachable or unparseable, any collectors or analyzers in the specification are used as a fallback.
+
+You update collectors and analyzers in the online specification to manage bug fixes. When a customer generates a support bundle, the online specification can detect those potential problems in the cluster and let the customer know how to fix them. Without the URI link option, you must wait until your customers next update their applications or Kubernetes versions before they are notified of potential problems. The URI link option is particularly useful for customers that do not update their application routinely.
+
+If you are using a modular approach to designing support bundles, you can use multiple online specs. Each specification supports one URI link. For more information about modular specs, see [About Creating Modular Support Bundle Specs](support-modular-support-bundle-specs).
+
+## Example: URI Linking to a Source Repository
+
+This example shows how Replicated could set up a URI link for one of its own components. You can follow a similar process to link to your own online repository for your support bundles.
+
+Replicated kURL includes an EKCO add-on for maintenance on embedded clusters, such as automating certificate rotation or data migration tasks. Replicated can ship this component with a support bundle manifest that warns users if they do not have this add-on installed or if it is not running in the cluster.
+
+**Example: Release v1.0.0**
+
+```yaml
+apiVersion: troubleshoot.sh/v1beta2
+kind: SupportBundle
+metadata:
+ name: ekco
+spec:
+ collectors:
+ analyzers:
+ - deploymentStatus:
+ checkName: Check EKCO is operational
+ name: ekc-operator
+ namespace: kurl
+ outcomes:
+ - fail:
+ when: absent
+ message: EKCO is not installed - please add the EKCO component to your kURL spec and re-run the installer script
+ - fail:
+ when: "< 1"
+ message: EKCO does not have any ready replicas
+ - pass:
+ message: EKCO has at least 1 replica
+```
+
+If a bug is discovered at any time after the release of the specification above, Replicated can write an analyzer for it in an online specification. By adding a URI link to the online specification, the support bundle uses the assets hosted in the online repository, which is kept current.
+
+The `uri` field is added to the specification as a raw file link. Replicated hosts the online specification on [GitHub](https://github.com/replicatedhq/troubleshoot-specs/blob/main/in-cluster/default.yaml).
+
+**Example: Release v1.1.0**
+
+```yaml
+apiVersion: troubleshoot.sh/v1beta2
+kind: SupportBundle
+metadata:
+ name: ekco
+spec:
+ uri: https://raw.githubusercontent.com/replicatedhq/troubleshoot-specs/main/in-cluster/default.yaml
+ collectors: [...]
+ analyzers: [...]
+```
+
+Using the `uri:` property, the support bundle gets the latest online specification if it can, or falls back to the collectors and analyzers listed in the specification that is in the cluster.
+
+Note that because the release version 1.0.0 did not contain the URI, Replicated would have to wait until existing users upgrade a cluster before getting the benefit of the new analyzer. Then, going forward, those users get any future online analyzers without having to upgrade. New users who install the version containing the URI as their initial installation automatically get any online analyzers when they generate a support bundle.
+
+For more information about the URI, see [Troubleshoot schema supports a `uri://` field](https://troubleshoot.sh/docs/support-bundle/supportbundle/#uri) in the Troubleshoot documentation. For a complete example, see [Debugging Kubernetes: Enhancements to Troubleshoot](https://www.replicated.com/blog/debugging-kubernetes-enhancements-to-troubleshoot/#Using-online-specs-for-support-bundles) in The Replicated Blog.
diff --git a/docs/reference/support-submit-request.md b/docs/reference/support-submit-request.md
new file mode 100644
index 0000000000..6097b903ac
--- /dev/null
+++ b/docs/reference/support-submit-request.md
@@ -0,0 +1,30 @@
+# Submitting a Support Request
+
+You can submit a support request and a support bundle using the Replicated Vendor Portal. Uploading a support bundle is secure and helps the Replicated support team troubleshoot your application faster. Severity 1 issues are resolved three times faster when you submit a support bundle with your support request.
+
+### Prerequisites
+
+The following prerequisites must be met to submit support requests:
+
+* Your Vendor Portal account must be configured for access to support before you can submit support requests. Contact your administrator to ensure that you are added to the correct team.
+
+* Your team must have a replicated-collab repository configured. If you are a team administrator and need information about getting a collab repository set up and adding users, see [Adding Users to the Collab Repository](team-management-github-username#add).
+
+
+### Submit a Support Request
+
+To submit a support request:
+
+1. From the [Vendor Portal](https://vendor.replicated.com), click **Support > Submit a Support Request** or go directly to the [Support page](https://vendor.replicated.com/support).
+
+1. In section 1 of the Support Request form, complete the fields with information about your issue.
+
+1. In section 2, do _one_ of the following actions:
+ - Use your pre-selected support bundle or select a different bundle in the pick list
+ - Select **Upload and attach a new support bundle** and attach a bundle from your file browser
+
+1. Click **Submit Support Request**. You receive a link to your support issue, where you can interact with the support team.
+
+ :::note
+ Click **Back** to exit without submitting a support request.
+ :::
diff --git a/docs/reference/team-management-github-username.mdx b/docs/reference/team-management-github-username.mdx
new file mode 100644
index 0000000000..def77ea8dd
--- /dev/null
+++ b/docs/reference/team-management-github-username.mdx
@@ -0,0 +1,139 @@
+import CollabRepoAbout from "../partials/collab-repo/_collab-repo-about.mdx"
+import CollabRbacResourcesImportant from "../partials/collab-repo/_collab-rbac-resources-important.mdx"
+import CollabExistingUser from "../partials/collab-repo/_collab-existing-user.mdx"
+
+
+# Managing Collab Repository Access
+
+This topic describes how to add users to the Replicated collab GitHub repository automatically through the Replicated Vendor Portal. It also includes information about managing user roles in this repository using Vendor Portal role-based access control (RBAC) policies.
+
+## Overview {#overview}
+
+
+
+ The Vendor Portal automatically adds your GitHub username to the collab repository and assigns it the Admin role. You receive an email with details about the collab repository when you are added.
+
+1. Follow the collab repository link from the email that you receive to log in to your GitHub account and access the repository.
+
+1. (Recommended) Manually remove any users in the collab repository that were previously added through GitHub.
+
+ :::note
+| Vendor Portal Role | GitHub collab Role | Description |
+|---|---|---|
+| Admin | Admin | Members assigned the default Admin role in the Vendor Portal are assigned the GitHub Admin role in the collab repository. |
+| Support Engineer | Triage | Members assigned the custom Support Engineer role in the Vendor Portal are assigned the GitHub Triage role in the collab repository. For information about creating a custom Support Engineer policy in the Vendor Portal, see Support Engineer in [Configuring RBAC Policies](team-management-rbac-configuring). For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
+| Read Only | Read | Members assigned the default Read Only role in the Vendor Portal are assigned the GitHub Read role in the collab repository. |
+| Sales | N/A | Users assigned the custom Sales role in the Vendor Portal do not have access to the collab repository. For information about creating a custom Sales policy in the Vendor Portal, see Sales in [Configuring RBAC Policies](team-management-rbac-configuring). For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
+| Custom policies with `**/admin` under `allowed:` | Admin | By default, members assigned to a custom RBAC policy that specifies `**/admin` under `allowed:` are assigned the GitHub Admin role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
+| Custom policies without `**/admin` under `allowed:` | Read Only | By default, members assigned to any custom RBAC policies that do not specify `**/admin` under `allowed:` are assigned the GitHub Read role in the collab repository. For information about editing custom RBAC policies to change this default GitHub role, see About Changing the Default GitHub Role below. |
+
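+For reference, a custom RBAC policy that maps to the GitHub Admin role might look like the following sketch (the policy name is illustrative):
+
+```json
+{
+  "v1": {
+    "name": "Support Admin",
+    "resources": {
+      "allowed": ["**/admin"],
+      "denied": []
+    }
+  }
+}
+```
+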
+
+ [View a larger version of this image](/images/vendor-portal-account-settings.png)
+
+1. In the **Two-Factor Authentication** pane, click **Turn on two-factor authentication**.
+
+
+
+ [View a larger version of this image](/images/vendor-portal-password-2fa.png)
+
+1. In the **Confirm password** dialog, enter your Vendor Portal account password. Click **Confirm password**.
+
+1. Scan the QR code that displays using a supported two-factor authentication application on your mobile device, such as Google Authenticator. Alternatively, click **Use this text code** in the Vendor Portal to generate an alphanumeric code that you enter in the mobile application.
+
+
+
+ [View a larger version of this image](/images/vendor-portal-scan-qr.png)
+
+ Your mobile application displays an authentication code.
+
+1. Enter the authentication code in the Vendor Portal.
+
+ Two-factor authentication is enabled and a list of recovery codes is displayed at the bottom of the **Two-Factor Authentication** pane.
+
+1. Save the recovery codes in a secure location. These codes can be used at any time (one time per code) if you lose your mobile device.
+
+1. Log out of your account, then log back in to test that it is enabled. You are prompted to enter a one-time code generated by the application on your mobile device.
+
+
+## Disable 2FA on Individual Accounts
+
+To disable two-factor authentication on your individual account:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), click **Account Settings** from the dropdown list in the upper right corner of the screen.
+
+
+
+ [View a larger version of this image](/images/vendor-portal-account-settings.png)
+
+1. In the **Two-Factor Authentication** pane, click **Turn off two-factor authentication**.
+
+1. In the **Confirm password** dialog, enter your Vendor Portal account password. Click **Confirm password**.
+
+## Enable or Disable 2FA for a Team
+
+As an administrator, you can enable and disable 2FA for teams. You must first enable 2FA on your individual account before you can enable 2FA for teams. After you enable 2FA for your team, team members can enable 2FA on their individual accounts.
+
+To enable or disable 2FA for a team:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), select the **Team** tab, then select **Multifactor Auth**.
+
+
+
+ [View a larger image](/images/team-2fa-auth.png)
+
+1. On the **Multifactor Authentication** page, do one of the following with the **Require Two-Factor Authentication for all Username/Password authenticating users** toggle:
+
+ - Turn on the toggle to enable 2FA
+ - Turn off the toggle to disable 2FA
+
+1. Click **Save changes**.
+
+
diff --git a/docs/reference/team-management.md b/docs/reference/team-management.md
new file mode 100644
index 0000000000..29c805b94b
--- /dev/null
+++ b/docs/reference/team-management.md
@@ -0,0 +1,126 @@
+import CollabRepoAbout from "../partials/collab-repo/_collab-repo-about.mdx"
+import CollabRbacImportant from "../partials/collab-repo/_collab-rbac-important.mdx"
+
+# Managing Team Members
+
+This topic describes how to manage team members in the Replicated Vendor Portal, such as inviting and removing members, and editing permissions. For information about managing user access to the Replicated collab repository in GitHub, see [Managing Collab Repository Access](team-management-github-username).
+
+## Viewing Team Members
+The [Team](https://vendor.replicated.com/team/members) page provides a list of all accounts currently associated with or invited to your team. Each row contains information about the user, including their two-factor authentication (2FA) status and role-based access control (RBAC) role, and lets administrators take additional actions, such as removing or re-inviting members and editing permissions.
+
+
+
+[View a larger image](/images/teams-view.png)
+
+All users, including those with read-only access, can see the name of the RBAC role assigned to each team member. However, when SAML authentication is enabled, users with the built-in read-only policy cannot see the RBAC role assigned to team members.
+
+## Invite Members
+By default, team administrators can invite more team members to collaborate. Invited users receive an email to activate their account. The activation link in the email is unique to the invited user. Following the activation link in the email also ensures that the invited user joins the team from which the invitation originated.
+
+:::note
+Teams that have enforced SAML-only authentication do not use the email invitation flow described in this procedure. These teams and their users must log in through their SAML provider.
+:::
+
+To invite a new team member:
+
+1. From the [Team Members](https://vendor.replicated.com/team/members) page, click **Invite team member**.
+
+ The Invite team member dialog opens.
+
+
+
+ [Invite team member dialog](/images/teams-invite-member.png)
+
+1. Enter the email address of the member.
+
+1. In the **Permissions** field, assign an RBAC policy from the dropdown list.
+
+
+
+## Enable Users to Auto-join Your Team
+By default, users must be invited to your team. Team administrators can use the auto-join feature to allow users from the same email domain to join their team automatically. This applies to users registering with an email, or with Google authentication if it is enabled for the team. The auto-join feature does not apply to SAML authentication because SAML users log in using their SAML provider's application portal instead of the Vendor Portal.
+
+To add, edit, or delete custom RBAC policies, see [Configuring RBAC Policies](team-management-rbac-configuring).
+
+To enable users to auto-join your team:
+
+1. From the Team Members page, click **Auto-join** from the left navigation.
+1. Enable the **Allow all users from my domain to be added to my team** toggle.
+
+
+
+ [View a larger image](/images/teams-auto-join.png)
+
+1. For **Default RBAC policy level for new accounts**, you can use the default Read Only policy or select another policy from the list. This RBAC policy is applied to all users who join the team with the auto-join feature.
+
+
+
+[View a larger version of this image](/images/airgap-telemetry.png)
+
+All support bundles uploaded to the Vendor Portal from air gap customers contribute to a comprehensive dataset, providing parity in the telemetry for air gap and online instances. Replicated recommends that you collect support bundles from air gap customers regularly (monthly or quarterly) to improve the completeness of the dataset. The Vendor Portal handles any overlapping event archives idempotently, ensuring data integrity.
+
+## Requirements
+
+Air gap telemetry has the following requirements:
+
+* To collect telemetry from air gap instances, one of the following must be installed in the cluster where the instance is running:
+
+ * The Replicated SDK installed in air gap mode. See [Installing the SDK in Air Gap Environments](/vendor/replicated-sdk-airgap).
+
+ * KOTS v1.92.1 or later
+
+ :::note
+ When both the Replicated SDK and KOTS v1.92.1 or later are installed in the cluster (such as when a Helm chart that includes the SDK is installed by KOTS), both collect and store instance telemetry in their own dedicated secret, subject to the size limitation noted below. In the case of any overlapping data points, the Vendor Portal will report these data points chronologically based on their timestamp.
+ :::
+
+* To collect custom metrics from air gap instances, the Replicated SDK must be installed in the cluster in air gap mode. See [Installing the SDK in Air Gap Environments](/vendor/replicated-sdk-airgap).
+
+ For more information about custom metrics, see [Configuring Custom Metrics](https://docs.replicated.com/vendor/custom-metrics).
+
+Replicated strongly recommends that all applications include the Replicated SDK because it enables access to both standard instance telemetry and custom metrics for air gap instances.
+
+## Limitation
+
+Telemetry data is capped at 4,000 events or 1 MB per Secret, whichever limit is reached first.
+
+When a limit is reached, the oldest events are purged until the payload is within the limit. For optimal use, consider collecting support bundles regularly (monthly or quarterly) from air gap customers.
+
+## Collect and View Air Gap Telemetry
+
+To collect telemetry from air gap instances:
+
+1. Ask your customer to collect a support bundle. See [Generating Support Bundles](/vendor/support-bundle-generating).
+
+1. After receiving the support bundle from your customer, go to the Vendor Portal **Customers**, **Customer Reporting**, or **Instance Details** page and upload the support bundle:
+
+ 
+
+ The telemetry collected from the support bundle appears in the instance data shortly. Allow a few minutes for all data to be processed.
diff --git a/docs/reference/template-functions-about.mdx b/docs/reference/template-functions-about.mdx
deleted file mode 100644
index cd3a65d15d..0000000000
--- a/docs/reference/template-functions-about.mdx
+++ /dev/null
@@ -1,158 +0,0 @@
-import UseCases from "../partials/template-functions/_use-cases.mdx"
-
-# About Template Functions
-
-This topic describes Replicated KOTS template functions, including use cases, template function contexts, and syntax.
-
-## Overview
-
-For applications deployed by Replicated KOTS, Replicated provides a set of custom template functions for Kubernetes manifest files, based on the Go text/template library.
-
-
-
-[View a larger version of this image](/images/conditional-item-true.png)
-
-Alternatively, if either `Option One` or `Boolean Example` is not selected, then the conditional statement evaluates to false and the `Conditional Item` field is not displayed:
-
-
-
-[View a larger version of this image](/images/conditional-item-false-option-two.png)
-
-
-
-[View a larger version of this image](/images/conditional-item-false-boolean.png)
-
-## Conditional Statement Examples
-
-This section includes examples of using KOTS template functions to construct conditional statements, which render different values depending on a given condition.
-
-### If-Else Statements
-
-A common use case for if-else statements with KOTS template functions is to set values for resources or objects deployed by your application, such as custom annotations or service types, based on user-specific data.
-
-This section includes examples of both single line and multi-line if-else statements. Using multi-line formatting can be useful to improve the readability of YAML files when longer or more complex if-else statements are needed.
-
-Multi-line if-else statements can be constructed using YAML block scalars and block chomping characters to ensure the rendered result is valid YAML. A _folded_ block scalar style is denoted using the greater than (`>`) character. With the folded style, single line breaks in the string are treated as a space. Additionally, the block chomping minus (`-`) character is used to remove all the line breaks at the end of a string. For more information about working with these characters, see [Block Style Productions](https://yaml.org/spec/1.2.2/#chapter-8-block-style-productions) in the YAML documentation.
-
-:::note
-For Helm-based applications that need to use more complex or nested if-else statements, you can alternatively use templating within your Helm chart `templates` rather than in the KOTS HelmChart custom resource. For more information, see [If/Else](https://helm.sh/docs/chart_template_guide/control_structures/#ifelse) in the Helm documentation.
-:::
-
-#### Single Line
-
-The following example shows if-else statements used in the KOTS HelmChart custom resource `values` field to render different values depending on if the user selects a load balancer or an ingress controller as the ingress type for the application. This example uses the KOTS [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) template function to return a boolean that evaluates to true if the configuration option value is equal to the supplied value.
-
-```yaml
-# KOTS HelmChart custom resource
-apiVersion: kots.io/v1beta2
-kind: HelmChart
-metadata:
- name: my-app
-spec:
- chart:
- name: my-app
- chartVersion: 0.23.0
- values:
- services:
- my-service:
- enabled: true
- appName: ["my-app"]
- # Render the service type based on the user's selection
- # '{{repl ...}}' syntax is used for `type` to improve readability of the if-else statement and render a string
- type: '{{repl if ConfigOptionEquals "ingress_type" "load_balancer" }}LoadBalancer{{repl else }}ClusterIP{{repl end }}'
- ports:
- http:
- enabled: true
- # Render the HTTP port for the service depending on the user's selection
- # repl{{ ... }} syntax is used for `port` to render an integer value
- port: repl{{ if ConfigOptionEquals "ingress_type" "load_balancer" }}repl{{ ConfigOption "load_balancer_port" }}repl{{ else }}8081repl{{ end }}
- protocol: HTTP
- targetPort: 8081
-```
-
-#### Multi-Line in KOTS HelmChart Values
-
-The following example uses a multi-line if-else statement in the KOTS HelmChart custom resource to render the path to the Replicated SDK image depending on if the user pushed images to a local private registry.
-
-This example uses the following KOTS template functions:
-* [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry) to return true if the environment is configured to rewrite images to a local registry
-* [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost) to return the local registry host configured by the user
-* [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) to return the local registry namespace configured by the user
-
-:::note
-This example uses the `{{repl ...}}` syntax rather than the `repl{{ ... }}` syntax to improve readability in the YAML file. However, both syntaxes are supported for this use case. For more information, see [Syntax](/reference/template-functions-about#syntax) in _About Template Functions_.
-:::
-
-```yaml
-# KOTS HelmChart custom resource
-apiVersion: kots.io/v1beta2
-kind: HelmChart
-metadata:
- name: samplechart
-spec:
- values:
- images:
- replicated-sdk: >-
- {{repl if HasLocalRegistry -}}
- {{repl LocalRegistryHost }}/{{repl LocalRegistryNamespace }}/replicated-sdk:1.0.0-beta.29
- {{repl else -}}
- docker.io/replicated/replicated-sdk:1.0.0-beta.29
- {{repl end}}
-```
-
-Given the example above, if the user is _not_ using a local registry, then the `replicated-sdk` value in the Helm chart is set to the location of the image on the default docker registry, as shown below:
-
-```yaml
-# Helm chart values file
-
-images:
- replicated-sdk: 'docker.io/replicated/replicated-sdk:1.0.0-beta.29'
-```
-
-#### Multi-Line in Secret Object
-
-The following example uses multi-line if-else statements in a Secret object deployed by KOTS to conditionally set the database hostname, port, username, and password depending on if the customer uses the database embedded with the application or brings their own external database.
-
-This example uses the following KOTS template functions:
-* [ConfigOptionEquals](/reference/template-functions-config-context#configoptionequals) to return a boolean that evaluates to true if the configuration option value is equal to the supplied value
-* [ConfigOption](/reference/template-functions-config-context#configoption) to return the user-supplied value for the specified configuration option
-* [Base64Encode](/reference/template-functions-static-context#base64encode) to encode the string with base64
-
-:::note
-This example uses the `{{repl ...}}` syntax rather than the `repl{{ ... }}` syntax to improve readability in the YAML file. However, both syntaxes are supported for this use case. For more information, see [Syntax](/reference/template-functions-about#syntax) in _About Template Functions_.
-:::
-
-```yaml
-# Postgres Secret
-apiVersion: v1
-kind: Secret
-metadata:
- name: postgres
-data:
- # Render the value for the database hostname depending on if an embedded or
- # external db is used.
- # Also, base64 encode the rendered value.
- DB_HOST: >-
- {{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
- {{repl Base64Encode "postgres" }}
- {{repl else -}}
- {{repl ConfigOption "external_postgres_host" | Base64Encode }}
- {{repl end}}
- DB_PORT: >-
- {{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
- {{repl Base64Encode "5432" }}
- {{repl else -}}
- {{repl ConfigOption "external_postgres_port" | Base64Encode }}
- {{repl end}}
- DB_USER: >-
- {{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
- {{repl Base64Encode "postgres" }}
- {{repl else -}}
- {{repl ConfigOption "external_postgres_user" | Base64Encode }}
- {{repl end}}
- DB_PASSWORD: >-
- {{repl if ConfigOptionEquals "postgres_type" "embedded_postgres" -}}
- {{repl ConfigOption "embedded_postgres_password" | Base64Encode }}
- {{repl else -}}
- {{repl ConfigOption "external_postgres_password" | Base64Encode }}
- {{repl end}}
-```
-
-### Ternary Operators
-
-Ternary operators are useful for templating strings where certain values within the string must be rendered differently depending on a given condition. Compared to if-else statements, ternary operators are useful when a small portion of a string needs to be conditionally rendered, as opposed to rendering different values based on a conditional statement. For example, a common use case for ternary operators is to template the path to an image repository based on user-supplied values.
-
-The following example uses ternary operators to render the registry and repository for a private nginx image depending on if a local image registry is used. This example uses the following KOTS template functions:
-* [HasLocalRegistry](/reference/template-functions-config-context#haslocalregistry) to return true if the environment is configured to rewrite images to a local registry
-* [LocalRegistryHost](/reference/template-functions-config-context#localregistryhost) to return the local registry host configured by the user
-* [LocalRegistryNamespace](/reference/template-functions-config-context#localregistrynamespace) to return the local registry namespace configured by the user
-
-```yaml
-# KOTS HelmChart custom resource
-apiVersion: kots.io/v1beta2
-kind: HelmChart
-metadata:
- name: samplechart
-spec:
- values:
- image:
- # If a local registry is configured, use the local registry host.
- # Otherwise, use proxy.replicated.com
- registry: repl{{ HasLocalRegistry | ternary LocalRegistryHost "proxy.replicated.com" }}
- # If a local registry is configured, use the local registry's namespace.
- # Otherwise, use proxy/my-app/quay.io/my-org
- repository: repl{{ HasLocalRegistry | ternary LocalRegistryNamespace "proxy/my-app/quay.io/my-org" }}/nginx
- tag: v1.0.1
-```
-
-## Formatting Examples
-
-This section includes examples of how to format the rendered output of KOTS template functions.
-
-In addition to the examples in this section, KOTS template functions in the Static context include several options for formatting values, such as converting strings to upper or lower case and trimming leading and trailing space characters. For more information, see [Static Context](/reference/template-functions-static-context).
-
-### Indentation
-
-When using template functions within nested YAML, it is important that the rendered template functions are indented correctly so that the resulting YAML is valid. A common use case for adding indentation to KOTS template functions is when templating annotations in the metadata of resources or objects deployed by your application based on user-supplied values.
-
-The [nindent](https://masterminds.github.io/sprig/strings.html) function can be used to prepend a new line to the beginning of the string and indent the string by a specified number of spaces.
-
-#### Indent Templated Helm Chart Values
-
-The following example shows templating a Helm chart value that sets annotations for an Ingress object. This example uses the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to return user-supplied annotations from the Admin Console **Config** page. It also uses [nindent](https://masterminds.github.io/sprig/strings.html) to indent the rendered value ten spaces.
-
-```yaml
-# KOTS HelmChart custom resource
-
-apiVersion: kots.io/v1beta2
-kind: HelmChart
-metadata:
- name: myapp
-spec:
- values:
- services:
- myservice:
- annotations: repl{{ ConfigOption "additional_annotations" | nindent 10 }}
-```
-
-#### Indent Templated Annotations in Manifest Files
-
-The following example shows templating annotations for an Ingress object. This example uses the KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to return user-supplied annotations from the Admin Console **Config** page. It also uses [nindent](https://masterminds.github.io/sprig/strings.html) to indent the rendered value four spaces.
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
- name: example-ingress
- annotations:
- kots.io/placeholder: |-
- repl{{ ConfigOption "ingress_annotations" | nindent 4 }}
-```
-
-### Render Quoted Values
-
-To wrap a rendered value in quotes, you can pipe the result from KOTS template functions with the `repl{{ ... }}` syntax into quotes using `| quote`. Or, you can use the `'{{repl ... }}'` syntax instead.
-
-One use case for quoted values in YAML is when indicator characters are included in values. In YAML, indicator characters (`-`, `?`, `:`) have special semantics and must be escaped if used in values. For more information, see [Indicator Characters](https://yaml.org/spec/1.2.2/#53-indicator-characters) in the YAML documentation.
-
-#### Example with `'{{repl ... }}'` Syntax
-
-```yaml
-customTag: '{{repl ConfigOption "tag" }}'
-```
-#### Example with `| quote`
-
-```yaml
-customTag: repl{{ ConfigOption "tag" | quote }}
-```
-
-The result for both examples is:
-
-```yaml
-customTag: 'key: value'
-```
-
-## Variables Example
-
-This section includes an example of using variables with KOTS template functions. For more information, see [Variables](https://pkg.go.dev/text/template#hdr-Variables) in the Go documentation.
-
-### Using Variables to Generate TLS Certificates in JSON
-
-You can use the Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions with KOTS template functions to generate certificate authorities (CAs) and signed certificates in JSON. One use case for this is to generate default CAs, certificates, and keys that users can override with their own values on the Admin Console **Config** page.
-
-The Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions require the subject's common name and the certificate's validity duration in days. The `genSignedCert` function also requires the CA that will sign the certificate. You can use variables and KOTS template functions to provide the necessary parameters when calling these functions.
-
-The following example shows how to use variables and KOTS template functions in the `default` property of a [`hidden`](/reference/custom-resource-config#hidden) item to pass parameters to the `genCA` and `genSignedCert` functions and generate a CA, certificate, and key. This example uses a `hidden` item (which is an item that is not displayed on the **Config** page) to generate the certificate chain because variables used in the KOTS Config custom resource can only be accessed from the same item where they were declared. For this reason, `hidden` items can be useful for evaluating complex templates.
-
-This example uses the following:
-* KOTS [ConfigOption](/reference/template-functions-config-context#configoption) template function to render the user-supplied value for the ingress hostname. This is passed as a parameter to the [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions
-* Sprig [genCA](https://masterminds.github.io/sprig/crypto.html) and [genSignedCert](https://masterminds.github.io/sprig/crypto.html) functions to generate a CA and a certificate signed by the CA
-* Sprig [dict](https://masterminds.github.io/sprig/dicts.html), [set](https://masterminds.github.io/sprig/dicts.html), and [dig](https://masterminds.github.io/sprig/dicts.html) dictionary functions to create a dictionary with entries for both the CA and the certificate, then traverse the dictionary to return the values of the CA, certificate, and key.
-* [toJson](https://masterminds.github.io/sprig/defaults.html) and [fromJson](https://masterminds.github.io/sprig/defaults.html) Sprig functions to encode the CA and certificate into a JSON string, then decode the JSON for the purpose of displaying the values on the **Config** page as defaults
-
-:::important
-Default values are treated as ephemeral. The following certificate chain is recalculated each time the application configuration is modified. Before using this example with your application, be sure that your application can handle updating these parameters dynamically.
-:::
-
-```yaml
-apiVersion: kots.io/v1beta1
-kind: Config
-metadata:
- name: config-sample
-spec:
- groups:
- - name: example_settings
- title: My Example Config
- items:
- - name: ingress_hostname
- title: Ingress Hostname
- help_text: Enter a DNS hostname to use as the cert's CN.
- type: text
- - name: tls_json
- title: TLS JSON
- type: textarea
- hidden: true
- default: |-
- repl{{ $ca := genCA (ConfigOption "ingress_hostname") 365 }}
- repl{{ $tls := dict "ca" $ca }}
- repl{{ $cert := genSignedCert (ConfigOption "ingress_hostname") (list ) (list (ConfigOption "ingress_hostname")) 365 $ca }}
- repl{{ $_ := set $tls "cert" $cert }}
- repl{{ toJson $tls }}
- - name: tls_ca
- title: Signing Authority
- type: textarea
- default: repl{{ fromJson (ConfigOption "tls_json") | dig "ca" "Cert" "" }}
- - name: tls_cert
- title: TLS Cert
- type: textarea
- default: repl{{ fromJson (ConfigOption "tls_json") | dig "cert" "Cert" "" }}
- - name: tls_key
- title: TLS Key
- type: textarea
- default: repl{{ fromJson (ConfigOption "tls_json") | dig "cert" "Key" "" }}
-```
-
-The following image shows how the default values for the CA, certificate, and key are displayed on the **Config** page:
-
-
-
-[View a larger version of this image](/images/certificate-chain-default-values.png)
-
-## Additional Examples
-
-The following topics include additional examples of using KOTS template functions in Kubernetes manifests deployed by KOTS or in KOTS custom resources:
-
-* [Add Status Informers](/vendor/admin-console-display-app-status#add-status-informers) in _Adding Resource Status Informers_
-* [Conditionally Including or Excluding Resources](/vendor/packaging-include-resources)
-* [Example: Including Optional Helm Charts](/vendor/helm-optional-charts)
-* [Example: Adding Database Configuration Options](/vendor/tutorial-adding-db-config)
-* [Templating Annotations](/vendor/resources-annotations-templating)
-* [Tutorial: Set Helm Chart Values with KOTS](/vendor/tutorial-config-setup)
\ No newline at end of file
diff --git a/docs/reference/template-functions-identity-context.md b/docs/reference/template-functions-identity-context.md
deleted file mode 100644
index b56aae4734..0000000000
--- a/docs/reference/template-functions-identity-context.md
+++ /dev/null
@@ -1,115 +0,0 @@
-# Identity Context
-
-## IdentityServiceEnabled
-
-```go
-func IdentityServiceEnabled() bool
-```
-
-Returns true if the Replicated identity service has been enabled and configured by the end customer.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-...
- env:
- - name: IDENTITY_ENABLED
- value: repl{{ IdentityServiceEnabled }}
-```
-
-
-## IdentityServiceClientID
-
-```go
-func IdentityServiceClientID() string
-```
-
-Returns the client ID required for the application to connect to the identity service OIDC server.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-...
- env:
- - name: CLIENT_ID
- value: repl{{ IdentityServiceClientID }}
-```
-
-
-## IdentityServiceClientSecret
-
-```go
-func IdentityServiceClientSecret() (string, error)
-```
-
-Returns the client secret required for the application to connect to the identity service OIDC server.
-
-```yaml
-apiVersion: v1
-kind: Secret
-...
-data:
- CLIENT_SECRET: repl{{ IdentityServiceClientSecret | b64enc }}
-```
-
-
-## IdentityServiceRoles
-
-```go
-func IdentityServiceRoles() map[string][]string
-```
-
-Returns a list of groups specified by the customer mapped to a list of roles as defined in the Identity custom resource manifest file.
-
-For more information about roles in the Identity custom resource, see [Identity](custom-resource-identity#roles) in the _Custom resources_ section.
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-...
- env:
- - name: RESTRICTED_GROUPS
- value: repl{{ IdentityServiceRoles | keys | toJson }}
-```
-
-
-## IdentityServiceName
-
-```go
-func IdentityServiceName() string
-```
-
-Returns the Service name for the identity service OIDC server.
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-...
- - path: /dex
- backend:
- service:
- name: repl{{ IdentityServiceName }}
- port:
- number: repl{{ IdentityServicePort }}
-```
-
-
-## IdentityServicePort
-
-```go
-func IdentityServicePort() string
-```
-
-Returns the Service port number for the identity service OIDC server.
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-...
- - path: /dex
- backend:
- service:
- name: repl{{ IdentityServiceName }}
- port:
- number: repl{{ IdentityServicePort }}
-```
diff --git a/docs/reference/template-functions-kurl-context.md b/docs/reference/template-functions-kurl-context.md
deleted file mode 100644
index 8885a80709..0000000000
--- a/docs/reference/template-functions-kurl-context.md
+++ /dev/null
@@ -1,83 +0,0 @@
-# kURL Context
-
-## kURL Context Functions
-
-For applications installed in embedded clusters created with Replicated kURL, you can use template functions to show all options the cluster was installed with.
-
-The creation of the Installer custom resource reflects both install script changes made by posting YAML to the kURL API and changes made with `-s` flags at runtime. These functions are not available on the config page.
-
-`KurlBool`, `KurlInt`, `KurlString`, and `KurlOption` all take a string `yamlPath` as a parameter.
-This path is the path from the manifest file, delineated between add-on and subfield by a period (`.`).
-For example, the kURL Kubernetes version can be accessed as `{{repl KurlString "Kubernetes.Version" }}`.
-
-`KurlBool`, `KurlInt`, and `KurlString` return a bool, an integer, and a string value, respectively.
-If used on a valid field but with the wrong type, these functions return the falsy value for their type: `false`, `0`, and `""`, respectively.
-The `KurlOption` function converts all bool, int, and string fields to string.
-All functions return falsy values if there is nothing at the `yamlPath` specified, or if they are run in a cluster with no Installer custom resource (that is, not a cluster created by kURL).
-
-The following sections describe each of the kURL context functions:
-
-## KurlBool
-
-```go
-func KurlBool(yamlPath string) bool
-```
-
-Returns the value at the yamlPath if there is a valid boolean there, or false if there is not.
-
-```yaml
-'{{repl KurlBool "Docker.NoCEonEE" }}'
-```
-
-
-## KurlInt
-
-```go
-func KurlInt(yamlPath string) int
-```
-
-Returns the value at the yamlPath if there is a valid integer there, or 0 if there is not.
-
-```yaml
-'{{repl KurlInt "Rook.CephReplicaCount" }}'
-```
-
-
-## KurlString
-
-```go
-func KurlString(yamlPath string) string
-```
-
-Returns the value at the yamlPath if there is a valid string there, or "" if there is not.
-
-```yaml
-'{{repl KurlString "Kubernetes.Version" }}'
-```
-
-
-## KurlOption
-
-```go
-func KurlOption(yamlPath string) string
-```
-
-Returns the value at the yamlPath if there is a valid string, int, or bool value there, or "" if there is not.
-Int and Bool values will be converted to string values.
-
-```yaml
-'{{repl KurlOption "Rook.CephReplicaCount" }}'
-```
-
-
-## KurlAll
-
-```go
-func KurlAll() string
-```
-
-Returns all values in the Installer custom resource as key:value pairs, sorted by key.
-
-```yaml
-'{{repl KurlAll }}'
-```
diff --git a/docs/reference/template-functions-license-context.md b/docs/reference/template-functions-license-context.md
deleted file mode 100644
index de301dfb15..0000000000
--- a/docs/reference/template-functions-license-context.md
+++ /dev/null
@@ -1,120 +0,0 @@
-# License Context
-
-## LicenseFieldValue
-```go
-func LicenseFieldValue(name string) string
-```
-LicenseFieldValue returns the value of the specified license field. LicenseFieldValue accepts custom license fields and all built-in license fields. For a list of all built-in fields, see [Built-In License Fields](/vendor/licenses-using-builtin-fields).
-
-LicenseFieldValue always returns a string, regardless of the license field type. To return integer or boolean values, you need to use the [ParseInt](/reference/template-functions-static-context#parseint) or [ParseBool](/reference/template-functions-static-context#parsebool) template function to convert the string value.
-
-#### String License Field
-
-The following example returns the value of the built-in `customerName` license field:
-
-```yaml
-customerName: '{{repl LicenseFieldValue "customerName" }}'
-```
-#### Integer License Field
-
-The following example returns the value of a custom integer license field named `numSeats`:
-
-```yaml
-numSeats: repl{{ LicenseFieldValue "numSeats" | ParseInt }}
-```
-This example uses [ParseInt](/reference/template-functions-static-context#parseint) to convert the returned value to an integer.
-
-#### Boolean License Field
-
-The following example returns the value of a custom boolean license field named `feature-1`:
-
-```yaml
-feature-1: repl{{ LicenseFieldValue "feature-1" | ParseBool }}
-```
-This example uses [ParseBool](/reference/template-functions-static-context#parsebool) to convert the returned value to a boolean.
-
-## LicenseDockerCfg
-```go
-func LicenseDockerCfg() string
-```
-LicenseDockerCfg returns a value that can be written to a secret, if you need to deploy one manually.
-Replicated KOTS creates and injects this secret automatically in normal conditions, but some deployments (with static, additional namespaces) may need to include this.
-
-```yaml
-apiVersion: v1
-kind: Secret
-type: kubernetes.io/dockerconfigjson
-metadata:
- name: myapp-registry
- namespace: my-other-namespace
-data:
- .dockerconfigjson: repl{{ LicenseDockerCfg }}
-```
-
-## Sequence
-
-```go
-func Sequence() int64
-```
-Sequence is the sequence of the application deployed.
-This starts at 0 for each installation, and increases with every app update, config change, license update, and registry setting change.
-
-```yaml
-'{{repl Sequence }}'
-```
-
-## Cursor
-
-```go
-func Cursor() string
-```
-Cursor is the channel sequence of the app.
-For instance, if 5 releases have been promoted to the channel that the app is running on, then this returns the string `5`.
-
-```yaml
-'{{repl Cursor }}'
-```
-
-## ChannelName
-
-```go
-func ChannelName() string
-```
-ChannelName is the name of the deployed channel of the app.
-
-```yaml
-'{{repl ChannelName }}'
-```
-
-## VersionLabel
-
-```go
-func VersionLabel() string
-```
-VersionLabel is the semantic version of the app, as specified when promoting a release to a channel.
-
-```yaml
-'{{repl VersionLabel }}'
-```
-
-## ReleaseNotes
-
-```go
-func ReleaseNotes() string
-```
-ReleaseNotes is the release notes of the current version of the app.
-
-```yaml
-'{{repl ReleaseNotes }}'
-```
-
-## IsAirgap
-
-```go
-func IsAirgap() bool
-```
-IsAirgap is `true` when the app was installed by uploading an air gap package, and `false` otherwise.
-
-```yaml
-'{{repl IsAirgap }}'
-```
diff --git a/docs/reference/template-functions-static-context.md b/docs/reference/template-functions-static-context.md
deleted file mode 100644
index 58de3cf2a5..0000000000
--- a/docs/reference/template-functions-static-context.md
+++ /dev/null
@@ -1,632 +0,0 @@
-# Static Context
-
-## About Mastermind Sprig
-
-Many of the utility functions provided come from Sprig, a third-party library of Go template functions.
-For more information, see [Sprig Function Documentation](https://masterminds.github.io/sprig/) on the Sprig website.
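-
-The following is a minimal sketch, assuming a hypothetical `appNameUpper` field, that pipes a literal string through the Sprig `upper` function:
-
-```yaml
-# Pipe a literal string through the Sprig `upper` function
-appNameUpper: '{{repl "my-app" | upper }}'
-# Renders as: appNameUpper: 'MY-APP'
-```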
-
-## Certificate Functions
-
-### PrivateCACert
-
-> Introduced in KOTS v1.117.0
-
-```go
-func PrivateCACert() string
-```
-
-PrivateCACert returns the name of a ConfigMap that contains private CA certificates provided by the end user. For Embedded Cluster installations, these certificates are provided with the `--private-ca` flag for the `install` command. For KOTS installations, the user provides the ConfigMap using the `--private-ca-configmap` flag for the `install` command.
-
-You can use this template function to mount the specified ConfigMap so your containers can access the internet through enterprise proxies that issue their own TLS certificates in order to inspect traffic.
-
-:::note
-This function will return the name of the ConfigMap even if the ConfigMap has no entries. If no ConfigMap exists, this function returns the empty string.
-:::
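-
-For example, the following is a minimal sketch, with hypothetical resource names and mount path, of mounting the ConfigMap returned by PrivateCACert into a container. It assumes the end user provided private CA certificates, because PrivateCACert returns the empty string when no ConfigMap exists:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: my-app
-spec:
-  selector:
-    matchLabels:
-      app: my-app
-  template:
-    metadata:
-      labels:
-        app: my-app
-    spec:
-      containers:
-        - name: my-app
-          image: my-app:1.0.0
-          volumeMounts:
-            # Mount the end user's private CA certificates into the container
-            - name: private-cas
-              mountPath: /etc/private-cas
-              readOnly: true
-      volumes:
-        - name: private-cas
-          configMap:
-            name: repl{{ PrivateCACert }}
-            # optional lets the Pod start even if the ConfigMap has no entries
-            optional: true
-```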
-
-## Cluster Information Functions
-
-### Distribution
-```go
-func Distribution() string
-```
-Distribution returns the Kubernetes distribution detected. The possible return values are:
-
-* aks
-* digitalOcean
-* dockerDesktop
-* eks
-* embedded-cluster
-* gke
-* ibm
-* k0s
-* k3s
-* kind
-* kurl
-* microk8s
-* minikube
-* oke
-* openShift
-* rke2
-
-:::note
-[IsKurl](#iskurl) can also be used to detect kURL instances.
-:::
-
-#### Detect the Distribution
-```yaml
-repl{{ Distribution }}
-```
-#### Equal To Comparison
-```yaml
-repl{{ eq Distribution "gke" }}
-```
-#### Not Equal To Comparison
-```yaml
-repl{{ ne Distribution "embedded-cluster" }}
-```
-See [Functions](https://pkg.go.dev/text/template#hdr-Functions) in the Go documentation.
-
-### IsKurl
-```go
-func IsKurl() bool
-```
-IsKurl returns true if running within a kURL-based installation.
-#### Detect kURL Installations
-```yaml
-repl{{ IsKurl }}
-```
-#### Detect Non-kURL Installations
-```yaml
-repl{{ not IsKurl }}
-```
-See [Functions](https://pkg.go.dev/text/template#hdr-Functions) in the Go documentation.
-
-### KotsVersion
-
-```go
-func KotsVersion() string
-```
-
-KotsVersion returns the current version of KOTS.
-
-```yaml
-repl{{ KotsVersion }}
-```
-
-You can compare the KOTS version as follows:
-```yaml
-repl{{KotsVersion | semverCompare ">= 1.19"}}
-```
-
-This returns `true` if the KOTS version is greater than or equal to `1.19`.
-
-For more complex comparisons, see [Semantic Version Functions](https://masterminds.github.io/sprig/semver.html) in the sprig documentation.
-
-### KubernetesMajorVersion
-
-> Introduced in KOTS v1.92.0
-
-```go
-func KubernetesMajorVersion() string
-```
-
-KubernetesMajorVersion returns the Kubernetes server *major* version.
-
-```yaml
-repl{{ KubernetesMajorVersion }}
-```
-
-You can compare the Kubernetes major version as follows:
-```yaml
-repl{{lt (KubernetesMajorVersion | ParseInt) 2 }}
-```
-
-This returns `true` if the Kubernetes major version is less than `2`.
-
-### KubernetesMinorVersion
-
-> Introduced in KOTS v1.92.0
-
-```go
-func KubernetesMinorVersion() string
-```
-
-KubernetesMinorVersion returns the Kubernetes server *minor* version.
-
-```yaml
-repl{{ KubernetesMinorVersion }}
-```
-
-You can compare the Kubernetes minor version as follows:
-```yaml
-repl{{gt (KubernetesMinorVersion | ParseInt) 19 }}
-```
-
-This returns `true` if the Kubernetes minor version is greater than `19`.
-
-### KubernetesVersion
-
-> Introduced in KOTS v1.92.0
-
-```go
-func KubernetesVersion() string
-```
-
-KubernetesVersion returns the Kubernetes server version.
-
-```yaml
-repl{{ KubernetesVersion }}
-```
-
-You can compare the Kubernetes version as follows:
-```yaml
-repl{{KubernetesVersion | semverCompare ">= 1.19"}}
-```
-
-This returns `true` if the Kubernetes version is greater than or equal to `1.19`.
-
-For more complex comparisons, see [Semantic Version Functions](https://masterminds.github.io/sprig/semver.html) in the sprig documentation.
-
-### Namespace
-```go
-func Namespace() string
-```
-Namespace returns the Kubernetes namespace that the application belongs to.
-```yaml
-'{{repl Namespace}}'
-```
-
-### NodeCount
-```go
-func NodeCount() int
-```
-NodeCount returns the number of nodes detected within the Kubernetes cluster.
-```yaml
-repl{{ NodeCount }}
-```
-
-### Lookup
-
-> Introduced in KOTS v1.103.0
-
-```go
-func Lookup(apiversion string, resource string, namespace string, name string) map[string]interface{}
-```
-
-Lookup searches resources in a running cluster and returns a resource or resource list.
-
-Lookup uses, and has the same functionality as, the Helm lookup function. For more information, see [lookup](https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function) in the Helm documentation.
-
-```yaml
-repl{{ Lookup "API_VERSION" "KIND" "NAMESPACE" "NAME" }}
-```
-
-Both `NAME` and `NAMESPACE` are optional and can be passed as an empty string ("").
-
-The following combinations of parameters are possible:
-
-| Behavior | Lookup function |
-|---|---|
-| `kubectl get pod mypod -n mynamespace` | `repl{{ Lookup "v1" "Pod" "mynamespace" "mypod" }}` |
-| `kubectl get pods -n mynamespace` | `repl{{ Lookup "v1" "Pod" "mynamespace" "" }}` |
-| `kubectl get pods --all-namespaces` | `repl{{ Lookup "v1" "Pod" "" "" }}` |
-| `kubectl get namespace mynamespace` | `repl{{ Lookup "v1" "Namespace" "" "mynamespace" }}` |
-| `kubectl get namespaces` | `repl{{ Lookup "v1" "Namespace" "" "" }}` |
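-
-For example, the following is a minimal sketch, with a hypothetical value name and namespace, that renders a different value depending on whether a namespace exists in the cluster. A non-empty map returned by Lookup evaluates to true in the conditional:
-
-```yaml
-# Render "true" if a namespace named "monitoring" exists, and "false" otherwise
-enableMetrics: '{{repl if Lookup "v1" "Namespace" "" "monitoring" }}true{{repl else }}false{{repl end }}'
-```
-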
-| readonly | hidden | Outcome | Use Case |
-|---|---|---|---|
-| false | true | Persistent | Set `readonly` to `false` and `hidden` to `true` to generate a value that persists across configuration changes and is hidden from the user. |
-| true | false | Ephemeral | Set `readonly` to `true` and `hidden` to `false` to regenerate the value each time the configuration changes and display it to the user. |
-| true | true | Ephemeral | Set `readonly` and `hidden` to `true` to regenerate the value each time the configuration changes and hide it from the user. |
-| false | false | Persistent | Set `readonly` and `hidden` to `false` to persist the value across configuration changes and display it to the user. For example, set both to `false` to let the user view and edit the value. |
-
+| Type | Description |
+|---|---|
+| Supported Kubernetes Distributions | EKS (AWS S3) |
+| Cost | Flat fee of $0.50 per bucket. |
+| Options | |
+| Data | |
+
+ [View a larger version of this image](/images/create-a-cluster.png)
+
+1. On the **Create a cluster** page, complete the following fields:
+
+   | Field | Description |
+   |---|---|
+   | Kubernetes distribution | Select the Kubernetes distribution for the cluster. |
+   | Version | Select the Kubernetes version for the cluster. The options available are specific to the distribution selected. |
+   | Name (optional) | Enter an optional name for the cluster. |
+   | Tags | Add one or more tags to the cluster as key-value pairs. |
+   | Set TTL | Select the Time to Live (TTL) for the cluster. When the TTL expires, the cluster is automatically deleted. TTL can be adjusted after cluster creation with [cluster update ttl](/reference/replicated-cli-cluster-update-ttl). |
+   | Instance type | Select the instance type to use for the nodes in the node group. The options available are specific to the distribution selected. |
+   | Disk size | Select the disk size in GiB to use per node. |
+   | Nodes | Select the number of nodes to provision in the node group. The options available are specific to the distribution selected. |
+
+ [View a larger version of this image](/images/cmx-assigned-cluster.png)
+
+### Prepare Clusters
+
+For applications distributed with the Replicated Vendor Portal, the [`cluster prepare`](/reference/replicated-cli-cluster-prepare) command reduces the number of steps required to provision a cluster and then deploy a release to the cluster for testing. This is useful in continuous integration (CI) workflows that run multiple times a day. For an example workflow that uses the `cluster prepare` command, see [Recommended CI/CD Workflows](/vendor/ci-workflows).
+
+The `cluster prepare` command does the following:
+* Creates a cluster
+* Creates a release for your application based on either a Helm chart archive or a directory containing the application YAML files
+* Creates a temporary customer of type `test`
+ :::note
+ Test customers created by the `cluster prepare` command are not saved in your Vendor Portal team.
+ :::
+* Installs the release in the cluster using either the Helm CLI or Replicated KOTS
+
+The `cluster prepare` command requires either a Helm chart archive or a directory containing the application YAML files to be installed:
+
+* **Install a Helm chart with the Helm CLI**:
+
+ ```bash
+ replicated cluster prepare \
+ --distribution K8S_DISTRO \
+ --version K8S_VERSION \
+ --chart HELM_CHART_TGZ
+ ```
+ The following example creates a kind cluster and installs a Helm chart in the cluster using the `nginx-chart-0.0.14.tgz` chart archive:
+ ```bash
+ replicated cluster prepare \
+ --distribution kind \
+ --version 1.27.0 \
+ --chart nginx-chart-0.0.14.tgz \
+ --set key1=val1,key2=val2 \
+ --set-string s1=val1,s2=val2 \
+ --set-json j1='{"key1":"val1","key2":"val2"}' \
+ --set-literal l1=val1,l2=val2 \
+ --values values.yaml
+ ```
+
+* **Install with KOTS from a YAML directory**:
+
+ ```bash
+ replicated cluster prepare \
+ --distribution K8S_DISTRO \
+ --version K8S_VERSION \
+ --yaml-dir PATH_TO_YAML_DIR
+ ```
+ The following example creates a k3s cluster and installs an application in the cluster using the manifest files in a local directory named `config-validation`:
+ ```bash
+ replicated cluster prepare \
+ --distribution k3s \
+ --version 1.26 \
+ --namespace config-validation \
+ --shared-password password \
+ --app-ready-timeout 10m \
+ --yaml-dir config-validation \
+ --config-values-file config-values.yaml \
+ --entitlements "num_of_queues=5"
+ ```
+
+For command usage, including additional options, see [cluster prepare](/reference/replicated-cli-cluster-prepare).
+
+### Access Clusters
+
+Compatibility Matrix provides the kubeconfig for clusters so that you can access clusters with the kubectl command line tool. For more information, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.
+
+To access a cluster from the command line:
+
+1. Verify that the cluster is in a Running state:
+
+ ```bash
+ replicated cluster ls
+ ```
+ In the output of the command, verify that the `STATUS` for the target cluster is `running`. For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
+
+1. Run the following command to open a new shell session with the kubeconfig configured for the cluster:
+
+ ```bash
+ replicated cluster shell CLUSTER_ID
+ ```
+ Where `CLUSTER_ID` is the unique ID for the running cluster that you want to access.
+
+ For command usage, see [cluster shell](/reference/replicated-cli-cluster-shell).
+
+1. Verify that you can interact with the cluster through kubectl by running a command. For example:
+
+ ```bash
+ kubectl get ns
+ ```
+
+1. Press Ctrl-D or type `exit` when done to end the shell and the connection to the server.
+
+### Upgrade Clusters (kURL Only)
+
+For kURL clusters provisioned with Compatibility Matrix, you can use the `cluster upgrade` command to upgrade the version of the kURL installer specification used to provision the cluster. A recommended use case for the `cluster upgrade` command is testing your application's compatibility with Kubernetes API resource version migrations after upgrade.
+
+The following example upgrades a kURL cluster from its previous version to version `9d5a44c`:
+
+```bash
+replicated cluster upgrade cabb74d5 --version 9d5a44c
+```
+
+For command usage, see [cluster upgrade](/reference/replicated-cli-cluster-upgrade).
+
+### Delete Clusters
+
+You can delete clusters using the Replicated CLI or the Vendor Portal.
+
+#### Replicated CLI
+
+To delete a cluster using the Replicated CLI:
+
+1. Get the ID of the target cluster:
+
+ ```
+ replicated cluster ls
+ ```
+ In the output of the command, copy the ID for the cluster.
+
+ **Example:**
+
+ ```
+ ID NAME DISTRIBUTION VERSION STATUS CREATED EXPIRES
+ 1234abc My Test Cluster eks 1.27 running 2023-10-09 17:08:01 +0000 UTC -
+ ```
+
+ For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
+
+1. Run the following command:
+
+ ```
+ replicated cluster rm CLUSTER_ID
+ ```
+ Where `CLUSTER_ID` is the ID of the target cluster.
+ For command usage, see [cluster rm](/reference/replicated-cli-cluster-rm).
+
+1. Confirm that the cluster was deleted:
+
+ ```
+ replicated cluster ls CLUSTER_ID --show-terminated
+ ```
+ Where `CLUSTER_ID` is the ID of the target cluster.
+ In the output of the command, you can see that the `STATUS` of the cluster is `terminated`. For command usage, see [cluster ls](/reference/replicated-cli-cluster-ls).
+
+#### Vendor Portal
+
+To delete a cluster using the Vendor Portal:
+
+1. Go to **Compatibility Matrix**.
+
+1. Under **Clusters**, in the vertical dots menu for the target cluster, click **Delete cluster**.
+
+
+
+ [View a larger version of this image](/images/cmx-delete-cluster.png)
+
+## About Using Compatibility Matrix with CI/CD
+
+Replicated recommends that you integrate Compatibility Matrix into your existing CI/CD workflow to automate the process of creating clusters to install your application and run tests. For more information, including additional best practices and recommendations for CI/CD, see [About Integrating with CI/CD](/vendor/ci-overview).
+
+### Replicated GitHub Actions
+
+Replicated maintains a set of custom GitHub actions that are designed to replace repetitive tasks related to using Compatibility Matrix and distributing applications with Replicated.
+
+If you use GitHub Actions as your CI/CD platform, you can include these custom actions in your workflows rather than using Replicated CLI commands. Integrating the Replicated GitHub actions into your CI/CD pipeline helps you quickly build workflows with the required inputs and outputs, without needing to manually create the required CLI commands for each step.
+
+To view all the available GitHub actions that Replicated maintains, see the [replicatedhq/replicated-actions](https://github.com/replicatedhq/replicated-actions/) repository in GitHub.
+
+For more information, see [Integrating Replicated GitHub Actions](/vendor/ci-workflows-github-actions).
+
+### Recommended Workflows
+
+Replicated recommends that you maintain unique CI/CD workflows for development (continuous integration) and for releasing your software (continuous delivery). For example development and release workflows that integrate Compatibility Matrix for testing, see [Recommended CI/CD Workflows](/vendor/ci-workflows).
+
+### Test Script Recommendations
+
+Incorporating code tests into your CI/CD workflows is important for ensuring that developers receive quick feedback and can make updates in small iterations. Replicated recommends that you create and run all of the following test types as part of your CI/CD workflows:
+
+
+
+[View a larger version of this image](/images/compatibility-matrix-ingress.png)
+
+## Managing Compatibility Matrix Tunnels {#manage-nodes}
+
+Tunnels are viewed, created, and removed using the Compatibility Matrix UI within the Vendor Portal, the Replicated CLI, GitHub Actions, or directly with the Vendor API v3. There is no limit to the number of tunnels you can create for a cluster, and multiple tunnels can connect to a single service, if desired.
+
+### Limitations
+
+Compatibility Matrix tunnels have the following limitations:
+* One tunnel can only connect to one service. If you need fanout routing into different services, consider installing the nginx ingress controller as a `NodePort` service and exposing it.
+* Tunnels are not supported for cloud distributions (EKS, GKE, AKS).
+
+### Supported Protocols
+
+A tunnel can support one or more protocols.
+The supported protocols are HTTP, HTTPS, WS, and WSS.
+GRPC and other protocols are not routed into the cluster.
+
+### Exposing Ports
+Once you have a node port available on the cluster, you can use the Replicated CLI to expose the node port to the public internet.
+This can be used multiple times on a single cluster.
+
+Optionally, you can specify the `--wildcard` flag to expose this port with a wildcard DNS entry and TLS certificate.
+This feature adds extra time to provision the port, so it should only be used if necessary.
+
+```bash
+replicated cluster port expose \
+ [cluster id] \
+ --port [node port] \
+ --protocol [protocol] \
+ --wildcard
+```
+
+For example, if you have the nginx ingress controller installed and the node port is 32456:
+
+```bash
+% replicated cluster ls
+ID NAME DISTRIBUTION VERSION STATUS
+1e616c55 tender_ishizaka k3s 1.29.2 running
+
+% replicated cluster port expose \
+ 1e616c55 \
+ --port 32456 \
+ --protocol http \
+ --protocol https \
+ --wildcard
+```
+
+:::note
+You can expose a node port that does not yet exist in the cluster.
+This is useful if you have a deterministic node port, but need the DNS name as a value in your Helm chart.
+:::
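+
+For example, the following is a minimal sketch, with a hypothetical `ingress.hostname` value key, of passing the hostname assigned by `cluster port expose` to a Helm chart:
+
+```yaml
+# values.yaml passed to the Helm CLI, using the hostname assigned
+# when the node port was exposed
+ingress:
+  hostname: happy-germain.ingress.replicatedcluster.com
+```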
+
+### Viewing Ports
+To view all exposed ports, use the Replicated CLI `port ls` subcommand with the cluster ID:
+
+```bash
+% replicated cluster port ls 1e616c55
+ID CLUSTER PORT PROTOCOL EXPOSED PORT WILDCARD STATUS
+d079b2fc 32456 http http://happy-germain.ingress.replicatedcluster.com true ready
+d079b2fc 32456 https https://happy-germain.ingress.replicatedcluster.com true ready
+```
+
+### Removing Ports
+Exposed ports are automatically deleted when a cluster terminates.
+If you want to remove a port (and the associated DNS records and TLS certs) prior to cluster termination, run the `port rm` subcommand with the cluster ID:
+
+```bash
+% replicated cluster port rm 1e616c55 --id d079b2fc
+```
+
+You can remove just one protocol, or all.
+Removing all protocols also removes the DNS record and TLS cert.
diff --git a/docs/reference/testing-pricing.mdx b/docs/reference/testing-pricing.mdx
new file mode 100644
index 0000000000..3023c79e78
--- /dev/null
+++ b/docs/reference/testing-pricing.mdx
@@ -0,0 +1,822 @@
+# Compatibility Matrix Pricing
+
+This topic describes the pricing for Replicated Compatibility Matrix.
+
+## Pricing Overview
+
+Compatibility Matrix usage-based pricing includes a $0.50 per-cluster startup cost, plus per-minute pricing based on instance size and count. Billing starts when the cluster state changes to "running" and ends when the cluster either expires (reaches its TTL) or is removed. Minutes are rounded up, so there is a minimum charge of $0.50 plus 1 minute for all running clusters. Each cluster's cost is rounded up to the nearest cent and subtracted from the available credits in the team account. The remaining credit balance is viewable on the Replicated Vendor Portal [Cluster History](https://vendor.replicated.com/compatibility-matrix/history) page or with the Vendor API v3 [/vendor/v3/cluster/stats](https://replicated-vendor-api.readme.io/reference/getclusterstats) endpoint. Cluster [add-ons](/vendor/testing-cluster-addons) may incur additional charges.
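+
+For example, assuming the r1.small rate shown below ($0.096 per hour), a cluster that runs for 61 minutes and 10 seconds is billed for 62 minutes: $0.50 + (62/60 × $0.096) = $0.5992, which rounds up to $0.60.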
+
+If the team's available credits are insufficient to run the cluster for the full duration of the TTL, the cluster creation will be rejected.
+
+## Cluster Quotas
+
+Each team is limited by the number of clusters that they can run concurrently. To increase the quota, reach out to your account manager.
+
+## VM Cluster Pricing (OpenShift, RKE2, K3s, Kind, Embedded Cluster, kURL)
+
+VM-based clusters approximately match the AWS m6i instance type pricing.
+
+| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
+|---|---|---|---|
+| r1.small | 2 | 8 | $0.096 |
+| r1.medium | 4 | 16 | $0.192 |
+| r1.large | 8 | 32 | $0.384 |
+| r1.xlarge | 16 | 64 | $0.768 |
+| r1.2xlarge | 32 | 128 | $1.536 |
+
+| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
+|---|---|---|---|
+| m6i.large | 2 | 8 | $0.115 |
+| m6i.xlarge | 4 | 16 | $0.230 |
+| m6i.2xlarge | 8 | 32 | $0.461 |
+| m6i.4xlarge | 16 | 64 | $0.922 |
+| m6i.8xlarge | 32 | 128 | $1.843 |
+| m7i.large | 2 | 8 | $0.121 |
+| m7i.xlarge | 4 | 16 | $0.242 |
+| m7i.2xlarge | 8 | 32 | $0.484 |
+| m7i.4xlarge | 16 | 64 | $0.968 |
+| m7i.8xlarge | 32 | 128 | $1.935 |
+| m5.large | 2 | 8 | $0.115 |
+| m5.xlarge | 4 | 16 | $0.230 |
+| m5.2xlarge | 8 | 32 | $0.461 |
+| m5.4xlarge | 16 | 64 | $0.922 |
+| m5.8xlarge | 32 | 128 | $1.843 |
+| m7g.large | 2 | 8 | $0.098 |
+| m7g.xlarge | 4 | 16 | $0.195 |
+| m7g.2xlarge | 8 | 32 | $0.392 |
+| m7g.4xlarge | 16 | 64 | $0.784 |
+| m7g.8xlarge | 32 | 128 | $1.567 |
+| c5.large | 2 | 4 | $0.102 |
+| c5.xlarge | 4 | 8 | $0.204 |
+| c5.2xlarge | 8 | 16 | $0.408 |
+| c5.4xlarge | 16 | 32 | $0.816 |
+| c5.9xlarge | 36 | 72 | $1.836 |
+| g4dn.xlarge | 4 | 16 | $0.631 |
+| g4dn.2xlarge | 8 | 32 | $0.902 |
+| g4dn.4xlarge | 16 | 64 | $1.445 |
+| g4dn.8xlarge | 32 | 128 | $2.611 |
+| g4dn.12xlarge | 48 | 192 | $4.964 |
+| g4dn.16xlarge | 64 | 256 | $5.222 |
+
+| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
+|---|---|---|---|
+| n2-standard-2 | 2 | 8 | $0.117 |
+| n2-standard-4 | 4 | 16 | $0.233 |
+| n2-standard-8 | 8 | 32 | $0.466 |
+| n2-standard-16 | 16 | 64 | $0.932 |
+| n2-standard-32 | 32 | 128 | $1.865 |
+| t2a-standard-2 | 2 | 8 | $0.092 |
+| t2a-standard-4 | 4 | 16 | $0.185 |
+| t2a-standard-8 | 8 | 32 | $0.370 |
+| t2a-standard-16 | 16 | 64 | $0.739 |
+| t2a-standard-32 | 32 | 128 | $1.478 |
+| t2a-standard-48 | 48 | 192 | $2.218 |
+| e2-standard-2 | 2 | 8 | $0.081 |
+| e2-standard-4 | 4 | 16 | $0.161 |
+| e2-standard-8 | 8 | 32 | $0.322 |
+| e2-standard-16 | 16 | 64 | $0.643 |
+| e2-standard-32 | 32 | 128 | $1.287 |
+| n1-standard-1+nvidia-tesla-t4+1 | 1 | 3.75 | $0.321 |
+| n1-standard-1+nvidia-tesla-t4+2 | 1 | 3.75 | $0.585 |
+| n1-standard-1+nvidia-tesla-t4+4 | 1 | 3.75 | $1.113 |
+| n1-standard-2+nvidia-tesla-t4+1 | 2 | 7.50 | $0.378 |
+| n1-standard-2+nvidia-tesla-t4+2 | 2 | 7.50 | $0.642 |
+| n1-standard-2+nvidia-tesla-t4+4 | 2 | 7.50 | $1.170 |
+| n1-standard-4+nvidia-tesla-t4+1 | 4 | 15 | $0.492 |
+| n1-standard-4+nvidia-tesla-t4+2 | 4 | 15 | $0.756 |
+| n1-standard-4+nvidia-tesla-t4+4 | 4 | 15 | $1.284 |
+| n1-standard-8+nvidia-tesla-t4+1 | 8 | 30 | $0.720 |
+| n1-standard-8+nvidia-tesla-t4+2 | 8 | 30 | $0.984 |
+| n1-standard-8+nvidia-tesla-t4+4 | 8 | 30 | $1.512 |
+| n1-standard-16+nvidia-tesla-t4+1 | 16 | 60 | $1.176 |
+| n1-standard-16+nvidia-tesla-t4+2 | 16 | 60 | $1.440 |
+| n1-standard-16+nvidia-tesla-t4+4 | 16 | 60 | $1.968 |
+| n1-standard-32+nvidia-tesla-t4+1 | 32 | 120 | $2.088 |
+| n1-standard-32+nvidia-tesla-t4+2 | 32 | 120 | $2.352 |
+| n1-standard-32+nvidia-tesla-t4+4 | 32 | 120 | $2.880 |
+| n1-standard-64+nvidia-tesla-t4+1 | 64 | 240 | $3.912 |
+| n1-standard-64+nvidia-tesla-t4+2 | 64 | 240 | $4.176 |
+| n1-standard-64+nvidia-tesla-t4+4 | 64 | 240 | $4.704 |
+| n1-standard-96+nvidia-tesla-t4+1 | 96 | 360 | $5.736 |
+| n1-standard-96+nvidia-tesla-t4+2 | 96 | 360 | $6.000 |
+| n1-standard-96+nvidia-tesla-t4+4 | 96 | 360 | $6.528 |
+
+| Instance Type | VCPUs | Memory (GiB) | Rate | List Price | USD/Credit per hour |
+|---|---|---|---|---|---|
+| Standard_B2ms | 2 | 8 | 8320 | $0.083 | $0.100 |
+| Standard_B4ms | 4 | 16 | 16600 | $0.166 | $0.199 |
+| Standard_B8ms | 8 | 32 | 33300 | $0.333 | $0.400 |
+| Standard_B16ms | 16 | 64 | 66600 | $0.666 | $0.799 |
+| Standard_DS2_v2 | 2 | 7 | 14600 | $0.146 | $0.175 |
+| Standard_DS3_v2 | 4 | 14 | 29300 | $0.293 | $0.352 |
+| Standard_DS4_v2 | 8 | 28 | 58500 | $0.585 | $0.702 |
+| Standard_DS5_v2 | 16 | 56 | 117000 | $1.170 | $1.404 |
+| Standard_D2ps_v5 | 2 | 8 | 14600 | $0.077 | $0.092 |
+| Standard_D4ps_v5 | 4 | 16 | 7700 | $0.154 | $0.185 |
+| Standard_D8ps_v5 | 8 | 32 | 15400 | $0.308 | $0.370 |
+| Standard_D16ps_v5 | 16 | 64 | 30800 | $0.616 | $0.739 |
+| Standard_D32ps_v5 | 32 | 128 | 61600 | $1.232 | $1.478 |
+| Standard_D48ps_v5 | 48 | 192 | 23200 | $1.848 | $2.218 |
+| Standard_NC4as_T4_v3 | 4 | 28 | 52600 | $0.526 | $0.631 |
+| Standard_NC8as_T4_v3 | 8 | 56 | 75200 | $0.752 | $0.902 |
+| Standard_NC16as_T4_v3 | 16 | 110 | 120400 | $1.204 | $1.445 |
+| Standard_NC64as_T4_v3 | 64 | 440 | 435200 | $4.352 | $5.222 |
+| Standard_D2S_v5 | 2 | 8 | 9600 | $0.096 | $0.115 |
+| Standard_D4S_v5 | 4 | 16 | 19200 | $0.192 | $0.230 |
+| Standard_D8S_v5 | 8 | 32 | 38400 | $0.384 | $0.461 |
+| Standard_D16S_v5 | 16 | 64 | 76800 | $0.768 | $0.922 |
+| Standard_D32S_v5 | 32 | 128 | 153600 | $1.536 | $1.843 |
+| Standard_D64S_v5 | 64 | 192 | 230400 | $2.304 | $2.765 |
+
+| Instance Type | VCPUs | Memory (GiB) | USD/Credit per hour |
+|---|---|---|---|
+| VM.Standard2.1 | 1 | 15 | $0.076 |
+| VM.Standard2.2 | 2 | 30 | $0.153 |
+| VM.Standard2.4 | 4 | 60 | $0.306 |
+| VM.Standard2.8 | 8 | 120 | $0.612 |
+| VM.Standard2.16 | 16 | 240 | $1.225 |
+| VM.Standard3Flex.1 | 1 | 4 | $0.055 |
+| VM.Standard3Flex.2 | 2 | 8 | $0.110 |
+| VM.Standard3Flex.4 | 4 | 16 | $0.221 |
+| VM.Standard3Flex.8 | 8 | 32 | $0.442 |
+| VM.Standard3Flex.16 | 16 | 64 | $0.883 |
+| VM.Standard.A1.Flex.1 | 1 | 4 | $0.019 |
+| VM.Standard.A1.Flex.2 | 2 | 8 | $0.038 |
+| VM.Standard.A1.Flex.4 | 4 | 16 | $0.077 |
+| VM.Standard.A1.Flex.8 | 8 | 32 | $0.154 |
+| VM.Standard.A1.Flex.16 | 16 | 64 | $0.309 |
+
+| Type | Description |
+|---|---|
+| Supported Kubernetes Versions | {/* START_kind_VERSIONS */}1.26.15, 1.27.16, 1.28.15, 1.29.12, 1.30.8, 1.31.4, 1.32.1{/* END_kind_VERSIONS */} |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | No |
+| Node Auto Scaling | No |
+| Nodes | Supports a single node. |
+| IP Family | Supports `ipv4` or `dual`. |
+| Limitations | See Limitations |
+| Common Use Cases | Smoke tests |
+
+| Type | Description |
+|---|---|
+| Supported k3s Versions | The upstream k3s version that matches the Kubernetes version requested. |
+| Supported Kubernetes Versions | 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.1, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.32.0 |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | Yes |
+| Node Auto Scaling | No |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Smoke tests, customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported RKE2 Versions | The upstream RKE2 version that matches the Kubernetes version requested. |
+| Supported Kubernetes Versions | 1.24.1, 1.24.2, 1.24.3, 1.24.4, 1.24.6, 1.24.7, 1.24.8, 1.24.9, 1.24.10, 1.24.11, 1.24.12, 1.24.13, 1.24.14, 1.24.15, 1.24.16, 1.24.17, 1.25.0, 1.25.2, 1.25.3, 1.25.4, 1.25.5, 1.25.6, 1.25.7, 1.25.8, 1.25.9, 1.25.10, 1.25.11, 1.25.12, 1.25.13, 1.25.14, 1.25.15, 1.25.16, 1.26.0, 1.26.1, 1.26.2, 1.26.3, 1.26.4, 1.26.5, 1.26.6, 1.26.7, 1.26.8, 1.26.9, 1.26.10, 1.26.11, 1.26.12, 1.26.13, 1.26.14, 1.26.15, 1.27.1, 1.27.2, 1.27.3, 1.27.4, 1.27.5, 1.27.6, 1.27.7, 1.27.8, 1.27.9, 1.27.10, 1.27.11, 1.27.12, 1.27.13, 1.27.14, 1.27.15, 1.27.16, 1.28.2, 1.28.3, 1.28.4, 1.28.5, 1.28.6, 1.28.7, 1.28.8, 1.28.9, 1.28.10, 1.28.11, 1.28.12, 1.28.13, 1.28.14, 1.28.15, 1.29.0, 1.29.1, 1.29.2, 1.29.3, 1.29.4, 1.29.5, 1.29.6, 1.29.7, 1.29.8, 1.29.9, 1.29.10, 1.29.11, 1.29.12, 1.29.13, 1.30.0, 1.30.1, 1.30.2, 1.30.3, 1.30.4, 1.30.5, 1.30.6, 1.30.7, 1.30.8, 1.30.9, 1.31.0, 1.31.1, 1.31.2, 1.31.3, 1.31.4, 1.31.5, 1.32.0, 1.32.1 |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | Yes |
+| Node Auto Scaling | No |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Smoke tests, customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported OpenShift Versions | 4.10.0-okd, 4.11.0-okd, 4.12.0-okd, 4.13.0-okd, 4.14.0-okd, 4.15.0-okd, 4.16.0-okd, 4.17.0-okd |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | Yes |
+| Node Auto Scaling | No |
+| Nodes | Supports multiple nodes for versions 4.13.0-okd and later. |
+| IP Family | Supports `ipv4`. |
+| Limitations | For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported Embedded Cluster Versions | Any valid release sequence that has previously been promoted to the channel where the customer license is assigned. Version is optional and defaults to the latest available release on the channel. |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | Yes |
+| Nodes | Supports multiple nodes (alpha). |
+| IP Family | Supports `ipv4`. |
+| Limitations | For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported kURL Versions | Any promoted kURL installer. Version is optional. For an installer version other than "latest", you can find the specific Installer ID for a previously promoted installer under the relevant **Install Command** (ID after kurl.sh/) on the **Channels > kURL Installer History** page in the Vendor Portal. For more information about viewing the history of kURL installers promoted to a channel, see [Installer History](/vendor/installer-history). |
+| Supported Instance Types | See Replicated Instance Types |
+| Node Groups | Yes |
+| Node Auto Scaling | No |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | Does not work with the Longhorn add-on. For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported Kubernetes Versions | 1.29, 1.30, 1.31, 1.32. Extended Support Versions: 1.25, 1.26, 1.27, 1.28 |
+| Supported Instance Types | m6i.large, m6i.xlarge, m6i.2xlarge, m6i.4xlarge, m6i.8xlarge, m7i.large, m7i.xlarge, m7i.2xlarge, m7i.4xlarge, m7i.8xlarge, m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.8xlarge, m7g.large (arm), m7g.xlarge (arm), m7g.2xlarge (arm), m7g.4xlarge (arm), m7g.8xlarge (arm), c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, g4dn.xlarge (gpu), g4dn.2xlarge (gpu), g4dn.4xlarge (gpu), g4dn.8xlarge (gpu), g4dn.12xlarge (gpu), g4dn.16xlarge (gpu). g4dn instance types depend on available capacity. After a g4dn cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [Amazon EKS optimized accelerated Amazon Linux AMIs](https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html#gpu-ami) in the AWS documentation. |
+| Node Groups | Yes |
+| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | You can choose only a minor version, not a patch version. The EKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported Kubernetes Versions | 1.28, 1.29, 1.30, 1.31 |
+| Supported Instance Types | n2-standard-2, n2-standard-4, n2-standard-8, n2-standard-16, n2-standard-32, t2a-standard-2 (arm), t2a-standard-4 (arm), t2a-standard-8 (arm), t2a-standard-16 (arm), t2a-standard-32 (arm), t2a-standard-48 (arm), e2-standard-2, e2-standard-4, e2-standard-8, e2-standard-16, e2-standard-32, n1-standard-1+nvidia-tesla-t4+1 (gpu), n1-standard-1+nvidia-tesla-t4+2 (gpu), n1-standard-1+nvidia-tesla-t4+4 (gpu), n1-standard-2+nvidia-tesla-t4+1 (gpu), n1-standard-2+nvidia-tesla-t4+2 (gpu), n1-standard-2+nvidia-tesla-t4+4 (gpu), n1-standard-4+nvidia-tesla-t4+1 (gpu), n1-standard-4+nvidia-tesla-t4+2 (gpu), n1-standard-4+nvidia-tesla-t4+4 (gpu), n1-standard-8+nvidia-tesla-t4+1 (gpu), n1-standard-8+nvidia-tesla-t4+2 (gpu), n1-standard-8+nvidia-tesla-t4+4 (gpu), n1-standard-16+nvidia-tesla-t4+1 (gpu), n1-standard-16+nvidia-tesla-t4+2 (gpu), n1-standard-16+nvidia-tesla-t4+4 (gpu), n1-standard-32+nvidia-tesla-t4+1 (gpu), n1-standard-32+nvidia-tesla-t4+2 (gpu), n1-standard-32+nvidia-tesla-t4+4 (gpu), n1-standard-64+nvidia-tesla-t4+1 (gpu), n1-standard-64+nvidia-tesla-t4+2 (gpu), n1-standard-64+nvidia-tesla-t4+4 (gpu), n1-standard-96+nvidia-tesla-t4+1 (gpu), n1-standard-96+nvidia-tesla-t4+2 (gpu), n1-standard-96+nvidia-tesla-t4+4 (gpu). You can specify more than one node. |
+| Node Groups | Yes |
+| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | You can choose only a minor version, not a patch version. The GKE installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported Kubernetes Versions | 1.28, 1.29, 1.30, 1.31 |
+| Supported Instance Types | Standard_B2ms, Standard_B4ms, Standard_B8ms, Standard_B16ms, Standard_DS2_v2, Standard_DS3_v2, Standard_DS4_v2, Standard_DS5_v2, Standard_DS2_v5, Standard_DS3_v5, Standard_DS4_v5, Standard_DS5_v5, Standard_D2ps_v5 (arm), Standard_D4ps_v5 (arm), Standard_D8ps_v5 (arm), Standard_D16ps_v5 (arm), Standard_D32ps_v5 (arm), Standard_D48ps_v5 (arm), Standard_NC4as_T4_v3 (gpu), Standard_NC8as_T4_v3 (gpu), Standard_NC16as_T4_v3 (gpu), Standard_NC64as_T4_v3 (gpu). GPU instance types depend on available capacity. After a GPU cluster is running, you also need to install your version of the NVIDIA device plugin for Kubernetes. See [NVIDIA GPU Operator with Azure Kubernetes Service](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) in the NVIDIA documentation. |
+| Node Groups | Yes |
+| Node Auto Scaling | Yes. Cost will be based on the max number of nodes. |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | You can choose only a minor version, not a patch version. The AKS installer chooses the latest patch for that minor version. For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Description |
+|---|---|
+| Supported Kubernetes Versions | 1.29.1, 1.30.1, 1.31.1 |
+| Supported Instance Types | VM.Standard2.1, VM.Standard2.2, VM.Standard2.4, VM.Standard2.8, VM.Standard2.16, VM.Standard3.Flex.1, VM.Standard3.Flex.2, VM.Standard3.Flex.4, VM.Standard3.Flex.8, VM.Standard3.Flex.16, VM.Standard.A1.Flex.1 (arm), VM.Standard.A1.Flex.2 (arm), VM.Standard.A1.Flex.4 (arm), VM.Standard.A1.Flex.8 (arm), VM.Standard.A1.Flex.16 (arm) |
+| Node Groups | Yes |
+| Node Auto Scaling | No |
+| Nodes | Supports multiple nodes. |
+| IP Family | Supports `ipv4`. |
+| Limitations | Provisioning an OKE cluster takes between 8 and 10 minutes. If needed, some timeouts in your CI pipelines might have to be adjusted. For additional limitations that apply to all distributions, see Limitations. |
+| Common Use Cases | Customer release tests |
+
+| Type | Memory (GiB) | VCPU Count |
+|---|---|---|
+| r1.small | 8 GB | 2 VCPUs |
+| r1.medium | 16 GB | 4 VCPUs |
+| r1.large | 32 GB | 8 VCPUs |
+| r1.xlarge | 64 GB | 16 VCPUs |
+| r1.2xlarge | 128 GB | 32 VCPUs |
+    This is text from a user config value: '{{repl ConfigOption "example_default_value"}}'
+    This is more text from a user config value: '{{repl ConfigOption "more_text"}}'
+    This is a hidden value: '{{repl ConfigOption "hidden_text"}}'
+
+   ```
+
+   This creates a reference to the `more_text` field using a Replicated KOTS template function. The ConfigOption template function renders the user input from the configuration item that you specify. For more information, see [Config Context](/reference/template-functions-config-context) in _Reference_.
+
+1. Save the changes to both YAML files.
+
+1. Change to the root `replicated-cli-tutorial` directory, then run the following command to verify that there are no errors in the YAML:
+
+   ```
+   replicated release lint --yaml-dir=manifests
+   ```
+
+1. Create a new release and promote it to the Unstable channel:
+
+   ```
+   replicated release create --auto
+   ```
+
+   **Example output**:
+
+   ```
+   • Reading manifests from ./manifests ✓
+   • Creating Release ✓
+     • SEQUENCE: 2
+   • Promoting ✓
+     • Channel 2GxpUm7lyB2g0ramqUXqjpLHzK0 successfully set to release 2
+   ```
+
+1. Type `y` and press **Enter** to continue with the defaults.
+
+   **Example output**:
+
+   ```
+   RULE    TYPE    FILENAME    LINE    MESSAGE
+
+   • Reading manifests from ./manifests ✓
+   • Creating Release ✓
+     • SEQUENCE: 2
+   • Promoting ✓
+     • Channel 2GmYFUFzj8JOSLYw0jAKKJKFua8 successfully set to release 2
+   ```
+
+   The release is created and promoted to the Unstable channel with `SEQUENCE: 2`.
+
+1. Verify that the release was promoted to the Unstable channel:
+
+   ```
+   replicated release ls
+   ```
+   **Example output**:
+
+   ```
+   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
+   2           2022-11-03T19:16:24Z    0001-01-01T00:00:00Z    Unstable
+   1           2022-11-03T18:49:13Z    0001-01-01T00:00:00Z
+   ```
+
+## Next Step
+
+Continue to [Step 9: Update the Application](tutorial-cli-update-app) to return to the Admin Console and update the application to the new version that you promoted.
diff --git a/docs/reference/tutorial-cli-create-release.mdx b/docs/reference/tutorial-cli-create-release.mdx
new file mode 100644
index 0000000000..142db61b77
--- /dev/null
+++ b/docs/reference/tutorial-cli-create-release.mdx
@@ -0,0 +1,85 @@
+# Step 4: Create a Release
+
+Now that you have the manifest files for the sample Kubernetes application, you can create a release for the `cli-tutorial` application and promote the release to the Unstable channel.
+
+By default, the Vendor Portal includes Unstable, Beta, and Stable release channels. The Unstable channel is intended for software vendors to use for internal testing, before promoting a release to the Beta or Stable channels for distribution to customers. For more information about channels, see [About Channels and Releases](releases-about).
+
+To create and promote a release to the Unstable channel:
+
+1. From the `replicated-cli-tutorial` directory, lint the application manifest files and ensure that there are no errors in the YAML:
+
+   ```
+   replicated release lint --yaml-dir=manifests
+   ```
+
+   If there are no errors, an empty list is displayed with a zero exit code:
+
+   ```text
+   RULE    TYPE    FILENAME    LINE    MESSAGE
+   ```
+
+   For a complete list of the possible error, warning, and informational messages that can appear in the output of the `release lint` command, see [Linter Rules](/reference/linter).
+
+1. Initialize the project as a Git repository:
+
+   ```
+   git init
+   git add .
+   git commit -m "Initial Commit: CLI Tutorial"
+   ```
+
+   Initializing the project as a Git repository allows you to track your history. The Replicated CLI also reads Git metadata to help with the generation of release metadata, such as version labels.
+
+1. From the `replicated-cli-tutorial` directory, create a release with the default settings:
+
+   ```
+   replicated release create --auto
+   ```
+
+   The `--auto` flag generates release notes and metadata based on the Git status.
+
+   **Example output:**
+
+   ```
+   • Reading Environment ✓
+
+   Prepared to create release with defaults:
+
+       yaml-dir        "./manifests"
+       promote         "Unstable"
+       version         "Unstable-ba710e5"
+       release-notes   "CLI release of master triggered by exampleusername [SHA: d4173a4] [31 Oct 22 08:51 MDT]"
+       ensure-channel  true
+       lint-release    true
+
+   Create with these properties? [Y/n]
+   ```
+
+1. Type `y` and press **Enter** to confirm the prompt.
+
+   **Example output:**
+
+   ```text
+   • Reading manifests from ./manifests ✓
+   • Creating Release ✓
+     • SEQUENCE: 1
+   • Promoting ✓
+     • Channel VEr0nhJBBUdaWpPvOIK-SOryKZEwa3Mg successfully set to release 1
+   ```
+   The release is created and promoted to the Unstable channel.
+
+1. Verify that the release was promoted to the Unstable channel:
+
+   ```
+   replicated release ls
+   ```
+   **Example output:**
+
+   ```text
+   SEQUENCE    CREATED                 EDITED                  ACTIVE_CHANNELS
+   1           2022-10-31T14:55:35Z    0001-01-01T00:00:00Z    Unstable
+   ```
+
+## Next Step
+
+Continue to [Step 5: Create a Customer](tutorial-cli-create-customer) to create a customer license file that you will upload when installing the application.
diff --git a/docs/reference/tutorial-cli-deploy-app.mdx b/docs/reference/tutorial-cli-deploy-app.mdx
new file mode 100644
index 0000000000..14fbb7d146
--- /dev/null
+++ b/docs/reference/tutorial-cli-deploy-app.mdx
@@ -0,0 +1,47 @@
+# Step 7: Configure the Application
+
+After you install KOTS, you can log in to the KOTS Admin Console. This procedure shows you how to make a configuration change for the application from the Admin Console, which is a typical task performed by end users.
+
+To configure the application:
+
+1. Access the Admin Console using `https://localhost:8800` if the installation script is still running. Otherwise, run the following command to access the Admin Console:
+
+   ```bash
+   kubectl kots admin-console --namespace NAMESPACE
+   ```
+
+   Replace `NAMESPACE` with the namespace where KOTS is installed.
+
+1. Enter the password that you created in [Step 6: Install KOTS and the Application](tutorial-cli-install-app-manager) to log in to the Admin Console.
+
+   The Admin Console dashboard opens. On the Admin Console **Dashboard** tab, users can take various actions, including viewing the application status, opening the application, checking for application updates, syncing their license, and setting up application monitoring on the cluster with Prometheus.
+
+1. On the **Config** tab, select the **Customize Text Inputs** checkbox. In the **Text Example** field, enter any text. For example, `Hello`.
+
+   This page displays configuration settings that are specific to the application. Software vendors define the fields that are displayed on this page in the KOTS Config custom resource. For more information, see [Config](/reference/custom-resource-config) in _Reference_.
+
+1. Click **Save config**. In the dialog that opens, click **Go to updated version**.
+
+   The **Version history** tab opens.
+
+1. Click **Deploy** for the new version. Then click **Yes, deploy** in the confirmation dialog.
+
+1. Click **Open App** to view the application in your browser.
+
+   Notice the text that you entered previously on the configuration page is displayed on the screen.
+
+   :::note
+   If you do not see the new text, refresh your browser.
+   :::
+
+## Next Step
+
+Continue to [Step 8: Create a New Version](tutorial-cli-create-new-version) to make a change to one of the manifest files for the `cli-tutorial` application, then use the Replicated CLI to create and promote a new release.
diff --git a/docs/reference/tutorial-cli-install-app-manager.mdx b/docs/reference/tutorial-cli-install-app-manager.mdx
new file mode 100644
index 0000000000..2838c93385
--- /dev/null
+++ b/docs/reference/tutorial-cli-install-app-manager.mdx
@@ -0,0 +1,101 @@
+# Step 6: Install KOTS and the Application
+
+The next step is to test the installation process for the application release that you promoted. Using the KOTS CLI, you will install KOTS and the sample application in your cluster.
+
+KOTS is the Replicated component that allows your users to install, manage, and upgrade your application. Users can interact with KOTS through the Admin Console or through the KOTS CLI.
+
+To install KOTS and the application:
+
+1. From the `replicated-cli-tutorial` directory, run the following command to get the installation commands for the Unstable channel, where you promoted the release for the `cli-tutorial` application:
+
+   ```
+   replicated channel inspect Unstable
+   ```
+
+   **Example output:**
+
+   ```
+   ID:             2GmYFUFzj8JOSLYw0jAKKJKFua8
+   NAME:           Unstable
+   DESCRIPTION:
+   RELEASE:        1
+   VERSION:        Unstable-d4173a4
+   EXISTING:
+
+       curl -fsSL https://kots.io/install | bash
+       kubectl kots install cli-tutorial/unstable
+
+   EMBEDDED:
+
+       curl -fsSL https://k8s.kurl.sh/cli-tutorial-unstable | sudo bash
+
+   AIRGAP:
+
+       curl -fSL -o cli-tutorial-unstable.tar.gz https://k8s.kurl.sh/bundle/cli-tutorial-unstable.tar.gz
+       # ... scp or sneakernet cli-tutorial-unstable.tar.gz to airgapped machine, then
+       tar xvf cli-tutorial-unstable.tar.gz
+       sudo bash ./install.sh airgap
+   ```
+   This command prints information about the channel, including the commands for installing in:
+   * An existing cluster
+   * An _embedded cluster_ created by Replicated kURL
+   * An air gap cluster that is not connected to the internet
+
+1. If you have not already, configure kubectl access to the cluster you provisioned as part of [Set Up the Environment](tutorial-cli-setup#set-up-the-environment). For more information about setting the context for kubectl, see [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/) in the Kubernetes documentation.
+
+1. Run the `EXISTING` installation script with the following flags to automatically upload the license file and run the preflight checks at the same time you run the installation.
+
+   **Example:**
+
+   ```
+   curl -fsSL https://kots.io/install | bash
+   kubectl kots install cli-tutorial/unstable \
+     --license-file ./LICENSE_YAML \
+     --shared-password PASSWORD \
+     --namespace NAMESPACE
+   ```
+
+   Replace:
+
+   - `LICENSE_YAML` with the local path to your license file.
+   - `PASSWORD` with a password to access the Admin Console.
+   - `NAMESPACE` with the namespace where KOTS and application will be installed.
+
+   When the Admin Console is ready, the script prints the `https://localhost:8800` URL where you can access the Admin Console and the `http://localhost:8888` URL where you can access the application.
+
+   **Example output**:
+
+   ```
+   • Deploying Admin Console
+     • Creating namespace ✓
+     • Waiting for datastore to be ready ✓
+   • Waiting for Admin Console to be ready ✓
+   • Waiting for installation to complete ✓
+   • Waiting for preflight checks to complete ✓
+
+   • Press Ctrl+C to exit
+   • Go to http://localhost:8800 to access the Admin Console
+
+   • Go to http://localhost:8888 to access the application
+   ```
+
+1. Verify that the Pods are running for the example NGINX service and for kotsadm:
+
+   ```bash
+   kubectl get pods --namespace NAMESPACE
+   ```
+
+   Replace `NAMESPACE` with the namespace where KOTS and application was installed.
+
+   **Example output:**
+
+   ```
+   NAME                       READY   STATUS    RESTARTS   AGE
+   kotsadm-7ccc8586b8-n7vf6   1/1     Running   0          12m
+   kotsadm-minio-0            1/1     Running   0          17m
+   kotsadm-rqlite-0           1/1     Running   0          17m
+   nginx-688f4b5d44-8s5v7     1/1     Running   0          11m
+   ```
+
+## Next Step
+
+Continue to [Step 7: Configure the Application](tutorial-cli-deploy-app) to log in to the Admin Console and make configuration changes.
diff --git a/docs/reference/tutorial-cli-install-cli.mdx b/docs/reference/tutorial-cli-install-cli.mdx
new file mode 100644
index 0000000000..1eb0f60aea
--- /dev/null
+++ b/docs/reference/tutorial-cli-install-cli.mdx
@@ -0,0 +1,80 @@
+# Step 1: Install the Replicated CLI
+
+In this tutorial, you use the Replicated CLI to create and promote releases for a sample application with Replicated. The Replicated CLI is the CLI for the Replicated Vendor Portal.
+
+This procedure describes how to create a Vendor Portal account, install the Replicated CLI on your local machine, and set up a `REPLICATED_API_TOKEN` environment variable for authentication.
+
+To install the Replicated CLI:
+
+1. Do one of the following to create an account in the Replicated Vendor Portal:
+   * **Join an existing team**: If you have an existing Vendor Portal team, you can ask your team administrator to send you an invitation to join.
+   * **Start a trial**: Alternatively, go to [vendor.replicated.com](https://vendor.replicated.com/) and click **Sign up** to create a 21-day trial account for completing this tutorial.
+
+1. Run the following command to use [Homebrew](https://brew.sh) to install the CLI:
+
+   ```
+   brew install replicatedhq/replicated/cli
+   ```
+
+   For the latest Linux or macOS versions of the Replicated CLI, see the [replicatedhq/replicated](https://github.com/replicatedhq/replicated/releases) releases in GitHub.
+
+1. Verify the installation:
+
+   ```
+   replicated version
+   ```
+   **Example output**:
+
+   ```json
+   {
+     "version": "0.37.2",
+     "git": "8664ac3",
+     "buildTime": "2021-08-24T17:05:26Z",
+     "go": {
+         "version": "go1.14.15",
+         "compiler": "gc",
+         "os": "darwin",
+         "arch": "amd64"
+     }
+   }
+   ```
+   If you run a Replicated CLI command, such as `replicated release ls`, you see the following error message about a missing API token:
+
+   ```
+   Error: set up APIs: Please provide your API token
+   ```
+
+1. Create an API token for the Replicated CLI:
+
+   1. Log in to the Vendor Portal, and go to the [Account settings](https://vendor.replicated.com/account-settings) page.
+
+   1. Under **User API Tokens**, click **Create user API token**. For Nickname, provide a name for the token. For Permissions, select **Read and Write**.
+
+      For more information about User API tokens, see [User API Tokens](replicated-api-tokens#user-api-tokens) in _Generating API Tokens_.
+
+   1. Click **Create Token**.
+
+   1. Copy the string that appears in the dialog.
+
+1. Export the string that you copied in the previous step to an environment variable named `REPLICATED_API_TOKEN`:
+
+   ```bash
+   export REPLICATED_API_TOKEN=YOUR_TOKEN
+   ```
+   Replace `YOUR_TOKEN` with the token string that you copied from the Vendor Portal in the previous step.
+
+1. Verify the User API token:
+
+   ```
+   replicated release ls
+   ```
+
+   You see the following error message:
+
+   ```
+   Error: App not found:
+   ```
+
+## Next Step
+
+Continue to [Step 2: Create an Application](tutorial-cli-create-app) to use the Replicated CLI to create an application.
diff --git a/docs/reference/tutorial-cli-manifests.mdx b/docs/reference/tutorial-cli-manifests.mdx
new file mode 100644
index 0000000000..84d15ba90c
--- /dev/null
+++ b/docs/reference/tutorial-cli-manifests.mdx
@@ -0,0 +1,35 @@
+# Step 3: Get the Sample Manifests
+
+To create a release for the `cli-tutorial` application, first create the Kubernetes manifest files for the application. This tutorial provides a set of sample manifest files for a simple Kubernetes application that deploys an NGINX service.
+
+To get the sample manifest files:
+
+1. Run the following command to create and change to a `replicated-cli-tutorial` directory:
+
+   ```
+   mkdir replicated-cli-tutorial
+   cd replicated-cli-tutorial
+   ```
+
+1. Create a `/manifests` directory and download the sample manifest files from the [kots-default-yaml](https://github.com/replicatedhq/kots-default-yaml) repository in GitHub:
+
+   ```
+   mkdir ./manifests
+   curl -fSsL https://github.com/replicatedhq/kots-default-yaml/archive/refs/heads/main.zip | \
+     tar xzv --strip-components=1 -C ./manifests \
+     --exclude README.md --exclude LICENSE --exclude .gitignore
+   ```
+
+1. Verify that you can see the YAML files in the `replicated-cli-tutorial/manifests` folder:
+
+   ```
+   ls manifests/
+   ```
+   ```
+   example-configmap.yaml   example-service.yaml   kots-app.yaml      kots-lint-config.yaml   kots-support-bundle.yaml
+   example-deployment.yaml  k8s-app.yaml           kots-config.yaml   kots-preflight.yaml
+   ```
+
+## Next Step
+
+Continue to [Step 4: Create a Release](tutorial-cli-create-release) to create and promote the first release for the `cli-tutorial` application using these manifest files.
diff --git a/docs/reference/tutorial-cli-setup.mdx b/docs/reference/tutorial-cli-setup.mdx
new file mode 100644
index 0000000000..a96155a59b
--- /dev/null
+++ b/docs/reference/tutorial-cli-setup.mdx
@@ -0,0 +1,35 @@
+import KubernetesTraining from "../partials/getting-started/_kubernetes-training.mdx"
+import LabsIntro from "../partials/getting-started/_labs-intro.mdx"
+import TutorialIntro from "../partials/getting-started/_tutorial-intro.mdx"
+import RelatedTopics from "../partials/getting-started/_related-topics.mdx"
+import VMRequirements from "../partials/getting-started/_vm-requirements.mdx"
+
+# Introduction and Setup
The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the grafana Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Grafana application in a browser.
+The Kubernetes Application custom resource supports functionality such as including buttons and links on the Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
+The Config custom resource specifies a user-facing configuration page in the Admin Console designed for collecting application configuration from users. The YAML below creates "Admin User" and "Admin Password" fields that will be shown to the user on the configuration page during installation. These fields will be used to set the login credentials for Grafana.
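+A minimal sketch of such a Config custom resource (the item names `admin_user` and `admin_password` are assumptions carried through the HelmChart example below):
+
+```yaml
+apiVersion: kots.io/v1beta1
+kind: Config
+metadata:
+  name: grafana-config
+spec:
+  groups:
+    - name: grafana
+      title: Grafana
+      items:
+        - name: admin_user
+          title: Admin User
+          type: text
+          default: admin
+        - name: admin_password
+          title: Admin Password
+          type: password
+          required: true
+```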
+The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart.
+
+The HelmChart custom resource below contains a `values` key, which creates a mapping to the Grafana `values.yaml` file. In this case, the `values.admin.user` and `values.admin.password` fields map to `admin.user` and `admin.password` in the Grafana `values.yaml` file. During installation, KOTS renders the ConfigOption template functions in the `values.admin.user` and `values.admin.password` fields and then sets the corresponding Grafana values accordingly.
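+A sketch of that mapping in the HelmChart custom resource, assuming the Config item names shown above:
+
+```yaml
+apiVersion: kots.io/v1beta2
+kind: HelmChart
+metadata:
+  name: grafana
+spec:
+  chart:
+    name: grafana
+    chartVersion: 9.6.5
+  values:
+    admin:
+      # Rendered from the Admin Console config screen at deploy time
+      user: '{{repl ConfigOption "admin_user"}}'
+      password: '{{repl ConfigOption "admin_password"}}'
+```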
+
+[View a larger version of this image](/images/release-promote.png)
+
+## Next Step
+
+Create a customer with the KOTS entitlement so that you can install the release in your cluster using Replicated KOTS. See [Step 5: Create a KOTS-Enabled Customer](tutorial-config-create-customer).
+
+## Related Topics
+
+* [About Channels and Releases](/vendor/releases-about)
+* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
+* [Config Custom Resource](/reference/custom-resource-config)
+* [Manipulating Helm Chart Values with KOTS](/vendor/helm-optional-value-keys)
\ No newline at end of file
diff --git a/docs/reference/tutorial-config-get-chart.md b/docs/reference/tutorial-config-get-chart.md
new file mode 100644
index 0000000000..7fb34a7821
--- /dev/null
+++ b/docs/reference/tutorial-config-get-chart.md
@@ -0,0 +1,119 @@
+# Step 1: Get the Sample Chart and Test
+
+To begin, get the sample Grafana Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install and access the application before adding the chart to a release in the Replicated vendor platform.
+
+To get the sample Grafana chart and test installation:
+
+1. Run the following command to pull and untar version 9.6.5 of the Bitnami Grafana Helm chart:
+
+ ```
+ helm pull --untar oci://registry-1.docker.io/bitnamicharts/grafana --version 9.6.5
+ ```
+ For more information about this chart, see the [bitnami/grafana](https://github.com/bitnami/charts/tree/main/bitnami/grafana) repository in GitHub.
+
+1. Change to the new `grafana` directory that was created:
+ ```
+ cd grafana
+ ```
+1. View the files in the directory:
+ ```
+ ls
+ ```
+ The directory contains the following files:
+ ```
+ Chart.lock Chart.yaml README.md charts templates values.yaml
+ ```
+1. Install the chart in your cluster:
+
+ ```
+ helm install grafana . --namespace grafana --create-namespace
+ ```
+ To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/grafana/README.md#installing-the-chart) in the `bitnami/grafana` repository.
+
+ After running the installation command, the following output is displayed:
+
+ ```
+ NAME: grafana
+ LAST DEPLOYED: Thu Dec 14 14:54:50 2023
+ NAMESPACE: grafana
+ STATUS: deployed
+ REVISION: 1
+ TEST SUITE: None
+ NOTES:
+ CHART NAME: grafana
+ CHART VERSION: 9.6.5
+ APP VERSION: 10.2.2
+
+ ** Please be patient while the chart is being deployed **
+
+ 1. Get the application URL by running these commands:
+ echo "Browse to http://127.0.0.1:8080"
+ kubectl port-forward svc/grafana 8080:3000 &
+
+ 2. Get the admin credentials:
+
+ echo "User: admin"
+ echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
+ # Note: Do not include grafana.validateValues.database here. See https://github.com/bitnami/charts/issues/20629
+ ```
+
+1. Watch the `grafana` Deployment until it is ready:
+
+ ```
+ kubectl get deploy grafana --namespace grafana --watch
+ ```
+
+1. When the Deployment is created, run the commands provided in the output of the installation command to get the Grafana login credentials:
+
+ ```
+ echo "User: admin"
+ echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
+ ```
+
+1. Run the commands provided in the output of the installation command to get the Grafana URL:
+
+ ```
+ echo "Browse to http://127.0.0.1:8080"
+ kubectl port-forward svc/grafana 8080:3000 --namespace grafana
+ ```
+
+ :::note
+ Include `--namespace grafana` in the `kubectl port-forward` command.
+ :::
+
+1. In a browser, go to the URL to open the Grafana login page:
+
+
+
+ [View a larger version of this image](/images/grafana-login.png)
+
+1. Log in using the credentials provided to open the Grafana dashboard:
+
+
+
+ [View a larger version of this image](/images/grafana-dashboard.png)
+
+1. Uninstall the Helm chart:
+
+ ```
+ helm uninstall grafana --namespace grafana
+ ```
+ This command removes all the Kubernetes resources associated with the chart and uninstalls the `grafana` release.
+
+1. Delete the namespace:
+
+ ```
+ kubectl delete namespace grafana
+ ```
+
+## Next Step
+
+Log in to the Vendor Portal and create an application. See [Step 2: Create an Application](tutorial-config-create-app).
+
+## Related Topics
+
+* [Helm Install](https://helm.sh/docs/helm/helm_install/)
+* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
+* [Helm Create](https://helm.sh/docs/helm/helm_create/)
+* [Helm Package](https://helm.sh/docs/helm/helm_package/)
+* [bitnami/grafana](https://github.com/bitnami/charts/tree/main/bitnami/grafana)
\ No newline at end of file
diff --git a/docs/reference/tutorial-config-install-kots.md b/docs/reference/tutorial-config-install-kots.md
new file mode 100644
index 0000000000..d963d3ae15
--- /dev/null
+++ b/docs/reference/tutorial-config-install-kots.md
@@ -0,0 +1,154 @@
+# Step 6: Install the Release with KOTS
+
+Next, get the KOTS installation command from the Unstable channel in the Vendor Portal and then install the release using the customer license that you downloaded.
+
+As part of installation, you will set Grafana login credentials on the KOTS Admin Console configuration page.
+
+To install the release with KOTS:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), go to **Channels**. From the **Unstable** channel card, under **Install**, copy the **KOTS Install** command.
+
+ 
+
+ [View a larger version of this image](/images/grafana-unstable-channel.png)
+
+1. On the command line, run the **KOTS Install** command that you copied:
+
+ ```bash
+ curl https://kots.io/install | bash
+ kubectl kots install $REPLICATED_APP/unstable
+ ```
+
+ This installs the latest version of the KOTS CLI and the Admin Console. The Admin Console provides a user interface where you can upload the customer license file and deploy the application.
+
+ For additional KOTS CLI installation options, including how to install without root access, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
+
+ :::note
+ KOTS v1.104.0 or later is required to deploy the Replicated SDK. You can verify the version of KOTS installed with `kubectl kots version`.
+ :::
+
+1. Complete the installation command prompts:
+
+ 1. For `Enter the namespace to deploy to`, enter `grafana`.
+
+ 1. For `Enter a new password to be used for the Admin Console`, provide a password to access the Admin Console.
+
+ When the Admin Console is ready, the command prints the URL where you can access the Admin Console. At this point, the KOTS CLI is installed and the Admin Console is running, but the application is not yet deployed.
+
+ **Example output:**
+
+ ```bash
+ Enter the namespace to deploy to: grafana
+ • Deploying Admin Console
+ • Creating namespace ✓
+ • Waiting for datastore to be ready ✓
+ Enter a new password for the Admin Console (6+ characters): ••••••••
+ • Waiting for Admin Console to be ready ✓
+
+ • Press Ctrl+C to exit
+ • Go to http://localhost:8800 to access the Admin Console
+ ```
+
+1. With the port forward running, go to `http://localhost:8800` in a browser to access the Admin Console.
+
+1. On the login page, enter the password that you created for the Admin Console.
+
+1. On the license page, select the license file that you downloaded previously and click **Upload license**.
+
+1. On the **Configure Grafana** page, enter a username and password. You will use these credentials to log in to Grafana.
+
+ 
+
+ [View a larger version of this image](/images/grafana-config.png)
+
+1. Click **Continue**.
+
+ The Admin Console dashboard opens. The application status changes from Missing to Unavailable while the `grafana` Deployment is being created.
+
+ 
+
+ [View a larger version of this image](/images/grafana-unavailable.png)
+
+1. On the command line, press Ctrl+C to exit the port forward.
+
+1. Watch for the `grafana` Deployment to become ready:
+
+ ```
+ kubectl get deploy grafana --namespace grafana --watch
+ ```
+
+1. After the Deployment is ready, run the following command to confirm that the `grafana-admin` Secret was updated with the new password that you created on the **Configure Grafana** page:
+
+ ```
+ echo "Password: $(kubectl get secret grafana-admin --namespace grafana -o jsonpath="{.data.GF_SECURITY_ADMIN_PASSWORD}" | base64 -d)"
+ ```
+
+   The output of this command displays the password that you created.
+
+1. Start the port forward again to access the Admin Console:
+
+ ```
+ kubectl kots admin-console --namespace grafana
+ ```
+
+1. Go to `http://localhost:8800` to open the Admin Console.
+
+ On the Admin Console dashboard, the application status is now displayed as Ready:
+
+ 
+
+ [View a larger version of this image](/images/grafana-ready.png)
+
+1. Click **Open App** to open the Grafana login page in a browser.
+
+
+
+ [View a larger version of this image](/images/grafana-login.png)
+
+1. On the Grafana login page, enter the username and password that you created on the **Configure Grafana** page. Confirm that you can log in to the application to access the Grafana dashboard:
+
+
+
+ [View a larger version of this image](/images/grafana-dashboard.png)
+
+1. On the command line, press Ctrl+C to exit the port forward.
+
+1. Uninstall the Grafana application from your cluster:
+
+ ```bash
+ kubectl kots remove $REPLICATED_APP --namespace grafana --undeploy
+ ```
+ **Example output**:
+ ```
+ • Removing application grafana-python reference from Admin Console and deleting associated resources from the cluster ✓
+ • Application grafana-python has been removed
+ ```
+
+1. Remove the Admin Console from the cluster:
+
+ 1. Delete the namespace where the Admin Console is installed:
+
+ ```
+ kubectl delete namespace grafana
+ ```
+ 1. Delete the Admin Console ClusterRole and ClusterRoleBinding:
+
+ ```
+ kubectl delete clusterrole kotsadm-role
+ ```
+ ```
+ kubectl delete clusterrolebinding kotsadm-rolebinding
+ ```
+
+## Next Step
+
+Congratulations! As part of this tutorial, you used the KOTS Config custom resource to define a configuration page in the Admin Console. You also used the KOTS HelmChart custom resource and KOTS ConfigOption template function to override the default Grafana login credentials with a user-supplied username and password.
+
+To learn more about how to customize the Config custom resource to create configuration fields for your application, see [Config](/reference/custom-resource-config).
+
+## Related Topics
+
+* [kots install](/reference/kots-cli-install/)
+* [Installing the KOTS CLI](/reference/kots-cli-getting-started/)
+* [Installing an Application](/enterprise/installing-overview)
+* [Deleting the Admin Console and Removing Applications](/enterprise/delete-admin-console)
diff --git a/docs/reference/tutorial-config-package-chart.md b/docs/reference/tutorial-config-package-chart.md
new file mode 100644
index 0000000000..eb29af6125
--- /dev/null
+++ b/docs/reference/tutorial-config-package-chart.md
@@ -0,0 +1,30 @@
+import DependencyYaml from "../partials/replicated-sdk/_dependency-yaml.mdx"
+import UnauthorizedError from "../partials/replicated-sdk/_401-unauthorized.mdx"
+
+# Step 3: Package the Helm Chart
+
+Next, add the Replicated SDK as a dependency of the Helm chart and then package the chart into a `.tgz` archive. The purpose of this step is to prepare the Helm chart to be added to a release.
+
+To add the Replicated SDK and package the Helm chart:
+
+1. In your local file system, go to the `grafana` directory that was created as part of [Step 1: Get the Sample Chart and Test](tutorial-config-get-chart).
+
+1. In the `Chart.yaml` file, add the Replicated SDK as a dependency:
+
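+   The dependency block that these docs normally render here comes from a shared partial. A sketch of what it looks like in `Chart.yaml` (the version is an assumption; use the latest Replicated SDK release):
+
+   ```yaml
+   dependencies:
+   - name: replicated
+     repository: oci://registry.replicated.com/library
+     version: 1.0.0-beta.31   # assumption: substitute the current SDK version
+   ```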
+
+
+[View a larger version of this image](/images/add-external-registry.png)
+
+The values for the fields are:
+
+**Endpoint:**
+Enter the same URL used to log in to ECR.
+For example, to link to the same registry as the one used in this guide, we would enter *4999999999999.dkr.ecr.us-east-2.amazonaws.com*.
+
+**Username:**
+Enter the AWS Access Key ID for the user created in the [Setting Up the Service Account User](#setting-up-the-service-account-user) section.
+
+**Password:**
+Enter the AWS Secret Key for the user created in the [Setting Up the Service Account User](#setting-up-the-service-account-user) section.
+
+* * *
+
+## 3. Update Definition Files
+
+The last step is to update our definition files to pull the image from the ECR repository.
+To do this, we'll update the `deployment.yaml` file by adding the ECR registry URL to the `image` value.
+Below is an example using the registry URL used in this guide.
+
+```diff
+ spec:
+ containers:
+ - name: nginx
+- image: nginx
++ image: 4999999999999.dkr.ecr.us-east-2.amazonaws.com/demo-apps/nginx
+ envFrom:
+```
+
+Save your changes, then create a new release and promote it to the *Unstable* channel.
+
+* * *
+
+## 4. Install the New Version
+
+To deploy the new version of the application, go back to the admin console and select the *Version History* tab.
+Click on **Check for Updates** and then **Deploy** when the new version is listed.
+To confirm that the new version was installed, verify that it matches the screenshot below.
+
+
+
+Now we can inspect the changes in the definition files.
+Looking at the `deployment.yaml` upstream file, we see the image path as we set it in the [Update Definition Files](#3-update-definition-files) section.
+
+
+
+Because KOTS detects that it cannot pull this image anonymously, it pulls the image through the proxy for the private registry that you configured. Looking at the `kustomization.yaml` downstream file, we can see that the image path is changed to use the Replicated proxy.
+
+
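+A rough sketch of that rewrite in the downstream `kustomization.yaml`, where `APP_SLUG` stands in for your application slug (illustrative, not the exact generated file):
+
+```yaml
+images:
+  - name: 4999999999999.dkr.ecr.us-east-2.amazonaws.com/demo-apps/nginx
+    newName: proxy.replicated.com/proxy/APP_SLUG/4999999999999.dkr.ecr.us-east-2.amazonaws.com/demo-apps/nginx
+```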
+
+The install of the new version should have created a new pod. If we run `kubectl describe pod` on the new NGINX pod, we can confirm that the image was in fact pulled from the ECR repository.
+
+
+
+* * *
+
+## Related Topics
+
+- [Connecting to an External Registry](packaging-private-images/)
+
+- [Replicated Community Thread on AWS Roles and Permissions](https://help.replicated.com/community/t/what-are-the-minimal-aws-iam-permissions-needed-to-proxy-images-from-elastic-container-registry-ecr/267)
+
+- [AWS ECR Managed Policies Documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecr_managed_policies.html)
diff --git a/docs/reference/tutorial-embedded-cluster-create-app.mdx b/docs/reference/tutorial-embedded-cluster-create-app.mdx
new file mode 100644
index 0000000000..b2eb1fab13
--- /dev/null
+++ b/docs/reference/tutorial-embedded-cluster-create-app.mdx
@@ -0,0 +1,63 @@
+# Step 1: Create an Application
+
+To begin, install the Replicated CLI and create an application in the Replicated Vendor Portal.
+
+An _application_ is an object that has its own customers, channels, releases, license fields, and more. A single team can have more than one application. It is common for teams to have multiple applications for the purpose of onboarding, testing, and iterating.
+
+To create an application:
+
+1. Install the Replicated CLI:
+
+ ```
+ brew install replicatedhq/replicated/cli
+ ```
+ For more installation options, see [Installing the Replicated CLI](/reference/replicated-cli-installing).
+
+1. Authorize the Replicated CLI:
+
+ ```
+ replicated login
+ ```
+ In the browser window that opens, complete the prompts to log in to your vendor account and authorize the CLI.
+
+1. Create an application named `Gitea`:
+
+ ```
+ replicated app create Gitea
+ ```
+
+1. Set the `REPLICATED_APP` environment variable to the application that you created. This allows you to interact with the application using the Replicated CLI without needing to use the `--app` flag with every command:
+
+ 1. Get the slug for the application that you created:
+
+ ```
+ replicated app ls
+ ```
+ **Example output**:
+ ```
+ ID NAME SLUG SCHEDULER
+ 2WthxUIfGT13RlrsUx9HR7So8bR Gitea gitea-kite kots
+ ```
+ In the example above, the application slug is `gitea-kite`.
+
+ :::note
+ The application _slug_ is a unique string that is generated based on the application name. You can use the application slug to interact with the application through the Replicated CLI and the Vendor API v3. The application name and slug are often different from one another because it is possible to create more than one application with the same name.
+ :::
+
+ 1. Set the `REPLICATED_APP` environment variable to the application slug.
+
+ **Example:**
+
+ ```
+ export REPLICATED_APP=gitea-kite
+ ```
+
+## Next Step
+
+Add the Replicated SDK to the Helm chart and package the chart to an archive. See [Step 2: Package the Helm Chart](tutorial-embedded-cluster-package-chart).
+
+## Related Topics
+
+* [Create an Application](/vendor/vendor-portal-manage-app#create-an-application)
+* [Installing the Replicated CLI](/reference/replicated-cli-installing)
+* [replicated app create](/reference/replicated-cli-app-create)
\ No newline at end of file
diff --git a/docs/reference/tutorial-embedded-cluster-create-customer.mdx b/docs/reference/tutorial-embedded-cluster-create-customer.mdx
new file mode 100644
index 0000000000..4cac53cfbb
--- /dev/null
+++ b/docs/reference/tutorial-embedded-cluster-create-customer.mdx
@@ -0,0 +1,34 @@
+# Step 4: Create an Embedded Cluster-Enabled Customer
+
+After promoting the release, create a customer with the Replicated KOTS and Embedded Cluster entitlements so that you can install the release with Embedded Cluster. A _customer_ represents a single licensed user of your application.
+
+To create a customer:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
+
+ The **Create a new customer** page opens:
+
+ 
+
+ [View a larger version of this image](/images/create-customer.png)
+
+1. For **Customer name**, enter a name for the customer. For example, `Example Customer`.
+
+1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
+
+1. For **License type**, select **Development**.
+
+1. For **License options**, enable the following entitlements:
+ * **KOTS Install Enabled**
+ * **Embedded Cluster Enabled**
+
+1. Click **Save Changes**.
+
+## Next Step
+
+Get the Embedded Cluster installation commands and install. See [Step 5: Install the Release on a VM](tutorial-embedded-cluster-install).
+
+## Related Topics
+
+* [About Customers](/vendor/licenses-about)
+* [Creating and Managing Customers](/vendor/releases-creating-customer)
\ No newline at end of file
diff --git a/docs/reference/tutorial-embedded-cluster-create-release.mdx b/docs/reference/tutorial-embedded-cluster-create-release.mdx
new file mode 100644
index 0000000000..b5132b4171
--- /dev/null
+++ b/docs/reference/tutorial-embedded-cluster-create-release.mdx
@@ -0,0 +1,132 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import HelmChartCr from "../partials/getting-started/_gitea-helmchart-cr-ec.mdx"
+import KotsCr from "../partials/getting-started/_gitea-kots-app-cr-ec.mdx"
+import K8sCr from "../partials/getting-started/_gitea-k8s-app-cr.mdx"
+import EcCr from "../partials/embedded-cluster/_ec-config.mdx"
+
+# Step 3: Add the Chart Archive to a Release
+
+Next, add the Helm chart archive to a new release for the application in the Replicated Vendor Portal. The purpose of this step is to configure a release that supports installation with Replicated Embedded Cluster.
+
+A _release_ represents a single version of your application and contains your application files. Each release is promoted to one or more _channels_. Channels provide a way to progress releases through the software development lifecycle: from internal testing, to sharing with early-adopters, and finally to making the release generally available.
+
+To create a release:
+
+1. In the `gitea` directory, create a subdirectory named `manifests`:
+
+ ```
+ mkdir manifests
+ ```
+
+ You will add the files required to support installation with Replicated KOTS to this subdirectory.
+
+1. Move the Helm chart archive that you created to `manifests`:
+
+ ```
+ mv gitea-1.0.6.tgz manifests
+ ```
+
+1. In `manifests`, create the YAML manifests required by KOTS:
+ ```
+ cd manifests
+ ```
+ ```
+ touch gitea.yaml kots-app.yaml k8s-app.yaml embedded-cluster.yaml
+ ```
+
+1. In each of the files that you created, paste the corresponding YAML provided in the tabs below:
+
+   The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. The `optionalValues` field sets the specified Helm values when a given conditional statement evaluates to true. In this case, if the application is installed with Embedded Cluster, then the Gitea service type is set to `NodePort` and the node port is set to `"32000"`. This will allow Gitea to be accessed from the local machine after deployment.
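+   A sketch of the relevant parts of that HelmChart custom resource follows. The exact YAML ships in the tutorial's tab; the `optionalValues` condition and the Bitnami Gitea value paths below are written from the description above, so treat them as an illustration:
+
+   ```yaml
+   # gitea.yaml (sketch)
+   apiVersion: kots.io/v1beta2
+   kind: HelmChart
+   metadata:
+     name: gitea
+   spec:
+     chart:
+       name: gitea            # must match the chart name in the archive
+       chartVersion: 1.0.6    # must match the chart version in the archive
+     optionalValues:
+       - when: 'repl{{ eq Distribution "embedded-cluster" }}'
+         recursiveMerge: false
+         values:
+           service:
+             type: NodePort
+             nodePorts:
+               http: "32000"
+   ```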
+   The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, application status indicators, and custom graphs. The YAML below provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and adds the port where the Gitea service can be accessed so that the user can open the application after installation.
+
+   The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML below adds an Open App button to the Admin Console dashboard that opens the application using the service port defined in the KOTS Application custom resource.
+   To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.
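+   As a minimal sketch (the `version` value here is an assumption; pin it to a real Embedded Cluster release):
+
+   ```yaml
+   # embedded-cluster.yaml (sketch)
+   apiVersion: embeddedcluster.replicated.com/v1beta1
+   kind: Config
+   spec:
+     version: 2.1.3+k8s-1.30   # assumption: substitute the Embedded Cluster version you target
+   ```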
+
+
+ [View a larger version of this image](/images/release-promote.png)
+
+## Next Step
+
+Create a customer with the Embedded Cluster entitlement so that you can install the release using Embedded Cluster. See [Step 4: Create an Embedded Cluster-Enabled Customer](tutorial-embedded-cluster-create-customer).
+
+## Related Topics
+
+* [About Channels and Releases](/vendor/releases-about)
+* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
+* [Embedded Cluster Config](/reference/embedded-config)
+* [Setting Helm Values with KOTS](/vendor/helm-optional-value-keys)
\ No newline at end of file
diff --git a/docs/reference/tutorial-embedded-cluster-install.mdx b/docs/reference/tutorial-embedded-cluster-install.mdx
new file mode 100644
index 0000000000..f33f1b01d2
--- /dev/null
+++ b/docs/reference/tutorial-embedded-cluster-install.mdx
@@ -0,0 +1,111 @@
+import KotsVerReq from "../partials/replicated-sdk/_kots-version-req.mdx"
+
+# Step 5: Install the Release on a VM
+
+Next, get the customer-specific Embedded Cluster installation commands and then install the release on a Linux VM.
+
+To install the release with Embedded Cluster:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), go to **Customers**. Click on the name of the customer you created.
+
+1. Click **Install instructions > Embedded cluster**.
+
+
+
+ [View a larger version of this image](/images/customer-install-instructions-dropdown.png)
+
+ The **Embedded cluster install instructions** dialog opens.
+
+
+
+ [View a larger version of this image](/images/embedded-cluster-install-dialog-latest.png)
+
+1. On the command line, SSH onto your Linux VM.
+
+1. Run the first command in the **Embedded cluster install instructions** dialog to download the latest release.
+
+1. Run the second command to extract the release.
+
+1. Run the third command to install the release.
+
+1. When prompted, enter a password for accessing the KOTS Admin Console.
+
+ The installation command takes a few minutes to complete.
+
+1. When the installation command completes, go to the URL provided in the output to log in to the Admin Console.
+
+ **Example output:**
+
+ ```bash
+ ✔ Host files materialized
+ ? Enter an Admin Console password: ********
+ ? Confirm password: ********
+ ✔ Node installation finished
+ ✔ Storage is ready!
+ ✔ Embedded Cluster Operator is ready!
+ ✔ Admin Console is ready!
+ ✔ Finished!
+ Visit the admin console to configure and install gitea-kite: http://104.155.145.60:30000
+ ```
+
+ At this point, the cluster is provisioned and the KOTS Admin Console is deployed, but the application is not yet installed.
+
+1. Bypass the browser TLS warning by clicking **Continue to Setup**.
+
+1. Click **Advanced > Proceed**.
+
+1. On the **HTTPS for the Gitea Admin Console** page, select **Self-signed** and click **Continue**.
+
+1. On the login page, enter the Admin Console password that you created during installation and click **Log in**.
+
+1. On the **Nodes** page, you can view details about the VM where you installed the release, including its node role, status, CPU, and memory. Users can also optionally add additional nodes on this page before deploying the application. Click **Continue**.
+
+ The Admin Console dashboard opens.
+
+1. In the **Version** section, for version `0.1.0`, click **Deploy** then **Yes, Deploy**.
+
+ The application status changes from Missing to Unavailable while the `gitea` Deployment is being created.
+
+1. After a few minutes when the application status is Ready, click **Open App** to view the Gitea application in a browser:
+
+ 
+
+ [View a larger version of this image](/images/gitea-ec-ready.png)
+
+
+
+ [View a larger version of this image](/images/gitea-app.png)
+
+1. In another browser window, open the [Vendor Portal](https://vendor.replicated.com/) and go to **Customers**. Select the customer that you created.
+
+ On the **Reporting** page for the customer, you can see details about the customer's license and installed instances:
+
+ 
+
+ [View a larger version of this image](/images/gitea-customer-reporting-ec.png)
+
+1. On the **Reporting** page, under **Instances**, click on the instance that you just installed to open the instance details page.
+
+ On the instance details page, you can see additional insights such as the version of Embedded Cluster that is running, instance status and uptime, and more:
+
+ 
+
+ [View a larger version of this image](/images/gitea-instance-insights-ec.png)
+
+1. (Optional) Reset the node to remove the cluster and the application from the node. This is useful for iteration and development so that you can reset a machine and reuse it instead of having to procure another machine.
+
+ ```bash
+ sudo ./APP_SLUG reset --reboot
+ ```
+   Where `APP_SLUG` is the unique slug for the application that you created. You can find the application slug by running `replicated app ls` on the command line on your local machine.
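+
+   For example, with an application named `gitea-kite` (illustrative output; your ID and slug will differ):
+
+   ```bash
+   replicated app ls
+   ID                             NAME          SLUG          SCHEDULER
+   2WthxUIfGT13RlrsUx9HR7So8bR    gitea-kite    gitea-kite    kots
+   ```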
+
+## Summary
+
+Congratulations! As part of this tutorial, you created a release in the Replicated Vendor Portal and installed the release with Replicated Embedded Cluster on a VM. To learn more about Embedded Cluster, see [Embedded Cluster Overview](embedded-overview).
+
+## Related Topics
+
+* [Embedded Cluster Overview](embedded-overview)
+* [Customer Reporting](/vendor/customer-reporting)
+* [Instance Details](/vendor/instance-insights-details)
+* [Reset a Node](/vendor/embedded-using#reset-a-node)
\ No newline at end of file
diff --git a/docs/reference/tutorial-embedded-cluster-package-chart.mdx b/docs/reference/tutorial-embedded-cluster-package-chart.mdx
new file mode 100644
index 0000000000..ab50d16d32
--- /dev/null
+++ b/docs/reference/tutorial-embedded-cluster-package-chart.mdx
@@ -0,0 +1,51 @@
+import DependencyYaml from "../partials/replicated-sdk/_dependency-yaml.mdx"
+import UnauthorizedError from "../partials/replicated-sdk/_401-unauthorized.mdx"
+
+# Step 2: Package the Gitea Helm Chart
+
+Next, get the sample Gitea Helm chart from Bitnami. Add the Replicated SDK as a dependency of the chart, then package the chart into a `.tgz` archive. The purpose of this step is to prepare the Helm chart to be added to a release.
+
+The Replicated SDK is a Helm chart that can be optionally added as a dependency of your application Helm chart. The SDK is installed as a small service running alongside your application, and provides an in-cluster API that you can use to embed Replicated features into your application. Additionally, the Replicated SDK provides access to insights and telemetry for instances of your application installed with the Helm CLI.
+
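+For example, once the SDK is installed alongside the application, other workloads in the namespace can query its in-cluster API. The following is a sketch: the `replicated` service name and port `3000` are the SDK defaults, and the `/api/v1/license/info` endpoint path is an assumption for illustration:
+
+```bash
+# Sketch: query the Replicated SDK in-cluster API from a pod in the same namespace
+curl http://replicated:3000/api/v1/license/info
+```
+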
+To add the Replicated SDK and package the Helm chart:
+
+1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
+
+ ```
+ helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
+ ```
+ For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
+
+1. Change to the new `gitea` directory that was created:
+ ```
+ cd gitea
+ ```
+1. View the files in the directory:
+ ```
+ ls
+ ```
+ The directory contains the following files:
+ ```
+ Chart.lock Chart.yaml README.md charts templates values.yaml
+ ```
+
+1. In the `Chart.yaml` file, add the Replicated SDK as a dependency:
+
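+   A minimal sketch of the dependency entry is shown below. The version is illustrative — use the current Replicated SDK release:
+
+   ```yaml
+   # Chart.yaml
+   dependencies:
+   - name: replicated
+     repository: oci://registry.replicated.com/library
+     version: 1.0.0-beta.15  # illustrative; use the current SDK version
+   ```
+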
+   * The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource.
+   * The KOTS Application custom resource enables features in the KOTS Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs. The YAML provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
+   * The Kubernetes Application custom resource supports functionality such as including buttons and links on the KOTS Admin Console dashboard. The YAML adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
+
+
+ [View a larger version of this image](/images/release-promote.png)
+
+## Next Step
+
+Create a customer with the KOTS entitlement so that you can install the release in your cluster using Replicated KOTS. See [Step 5: Create a KOTS-Enabled Customer](tutorial-kots-helm-create-customer).
+
+## Related Topics
+
+* [About Channels and Releases](/vendor/releases-about)
+* [Configuring the HelmChart Custom Resource](/vendor/helm-native-v2-using)
\ No newline at end of file
diff --git a/docs/reference/tutorial-kots-helm-get-chart.md b/docs/reference/tutorial-kots-helm-get-chart.md
new file mode 100644
index 0000000000..7239e490d9
--- /dev/null
+++ b/docs/reference/tutorial-kots-helm-get-chart.md
@@ -0,0 +1,107 @@
+# Step 1: Get the Sample Chart and Test
+
+To begin, get the sample Gitea Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install and access the application before adding the chart to a release in the Replicated Vendor Portal.
+
+To get the sample Gitea Helm chart and test installation:
+
+1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
+
+ ```
+ helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
+ ```
+ For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
+
+1. Change to the new `gitea` directory that was created:
+ ```
+ cd gitea
+ ```
+1. View the files in the directory:
+ ```
+ ls
+ ```
+ The directory contains the following files:
+ ```
+ Chart.lock Chart.yaml README.md charts templates values.yaml
+ ```
+1. Install the Gitea chart in your cluster:
+
+ ```
+ helm install gitea . --namespace gitea --create-namespace
+ ```
+ To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/gitea/README.md#installing-the-chart) in the `bitnami/gitea` repository.
+
+ When the chart is installed, the following output is displayed:
+
+ ```
+ NAME: gitea
+ LAST DEPLOYED: Tue Oct 24 12:44:55 2023
+ NAMESPACE: gitea
+ STATUS: deployed
+ REVISION: 1
+ TEST SUITE: None
+ NOTES:
+ CHART NAME: gitea
+ CHART VERSION: 1.0.6
+ APP VERSION: 1.20.5
+
+ ** Please be patient while the chart is being deployed **
+
+ 1. Get the Gitea URL:
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ Watch the status with: 'kubectl get svc --namespace gitea -w gitea'
+
+ export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
+ echo "Gitea URL: http://$SERVICE_IP/"
+
+ WARNING: You did not specify a Root URL for Gitea. The rendered URLs in Gitea may not show correctly. In order to set a root URL use the rootURL value.
+
+ 2. Get your Gitea login credentials by running:
+
+ echo Username: bn_user
+ echo Password: $(kubectl get secret --namespace gitea gitea -o jsonpath="{.data.admin-password}" | base64 -d)
+ ```
+
+1. Watch the `gitea` LoadBalancer service until an external IP is available:
+
+ ```
+ kubectl get svc gitea --namespace gitea --watch
+ ```
+
+1. When the external IP for the `gitea` LoadBalancer service is available, run the commands provided in the output of the installation command to get the Gitea URL:
+
+ ```
+ export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
+ echo "Gitea URL: http://$SERVICE_IP/"
+ ```
+
+1. In a browser, go to the Gitea URL to confirm that you can see the welcome page for the application:
+
+
+
+ [View a larger version of this image](/images/gitea-app.png)
+
+1. Uninstall the Helm chart:
+
+ ```
+ helm uninstall gitea --namespace gitea
+ ```
+ This command removes all the Kubernetes components associated with the chart and uninstalls the `gitea` release.
+
+1. Delete the namespace:
+
+ ```
+ kubectl delete namespace gitea
+ ```
+
+## Next Step
+
+Log in to the Vendor Portal and create an application. See [Step 2: Create an Application](tutorial-kots-helm-create-app).
+
+## Related Topics
+
+* [Helm Install](https://helm.sh/docs/helm/helm_install/)
+* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
+* [Helm Create](https://helm.sh/docs/helm/helm_create/)
+* [Helm Package](https://helm.sh/docs/helm/helm_package/)
+* [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea)
\ No newline at end of file
diff --git a/docs/reference/tutorial-kots-helm-install-helm.md b/docs/reference/tutorial-kots-helm-install-helm.md
new file mode 100644
index 0000000000..a0a29c94cf
--- /dev/null
+++ b/docs/reference/tutorial-kots-helm-install-helm.md
@@ -0,0 +1,118 @@
+# Step 7: Install the Release with the Helm CLI
+
+Next, install the same release using the Helm CLI. All releases that contain one or more Helm charts can be installed with the Helm CLI.
+
+All Helm charts included in a release are automatically pushed to the Replicated registry when the release is promoted to a channel. Helm CLI installations require that the customer has a valid email address to authenticate with the Replicated registry.
+
+To install the release with the Helm CLI:
+
+1. Create a new customer to test the Helm CLI installation:
+
+ 1. In the [Vendor Portal](https://vendor.replicated.com), click **Customers > Create customer**.
+
+ The **Create a new customer** page opens:
+
+ 
+
+ [View a larger version of this image](/images/create-customer.png)
+
+ 1. For **Customer name**, enter a name for the customer. For example, `Helm Customer`.
+
+ 1. For **Channel**, select **Unstable**. This allows the customer to install releases promoted to the Unstable channel.
+
+   1. For **Customer email**, enter the email address for the customer. The customer email address is required to install the application with the Helm CLI. This email address is never used to send emails to customers.
+
+   1. For **License type**, select **Trial**.
+
+ 1. (Optional) For **License options**, _disable_ the **KOTS Install Enabled** entitlement.
+
+ 1. Click **Save Changes**.
+
+1. On the **Manage customer** page for the new customer, click **Helm install instructions**.
+
+ 
+
+ [View a larger version of this image](/images/tutorial-gitea-helm-customer-install-button.png)
+
+ You will use the instructions provided in the **Helm install instructions** dialog to install the chart.
+
+1. Before you run the first command in the **Helm install instructions** dialog, create a `gitea` namespace for the installation:
+
+ ```
+ kubectl create namespace gitea
+ ```
+
+1. Update the current kubectl context to target the new `gitea` namespace. This ensures that the chart is installed in the `gitea` namespace without requiring you to set the `--namespace` flag with the `helm install` command:
+
+ ```
+ kubectl config set-context --namespace=gitea --current
+ ```
+
+1. Run the commands provided in the **Helm install instructions** dialog to log in to the registry and install the Helm chart.
+
+
+
+ [View a larger version of this image](/images/tutorial-gitea-helm-install-instructions.png)
+
+ :::note
+ You can ignore the **No preflight checks found** warning for the purpose of this tutorial. This warning appears because there are no specifications for preflight checks in the Helm chart archive.
+ :::
+
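+   For reference, the commands in the dialog look similar to the following sketch, where `EMAIL` is the customer email address, `LICENSE_ID` is the customer's license ID, and `APP_SLUG` is your application slug. Copy the exact commands from the dialog:
+
+   ```bash
+   # Sketch: log in to the Replicated registry, then install the chart
+   helm registry login registry.replicated.com --username EMAIL --password LICENSE_ID
+   helm install gitea oci://registry.replicated.com/APP_SLUG/unstable/gitea
+   ```
+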
+1. After the installation command completes, you can see that both the `gitea` Deployment and the Replicated SDK `replicated` Deployment were created:
+
+ ```
+ kubectl get deploy
+ ```
+ **Example output:**
+ ```
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ gitea 0/1 1 0 35s
+ replicated 1/1 1 1 35s
+ ```
+
+1. Watch the `gitea` LoadBalancer service until an external IP is available:
+
+ ```
+ kubectl get svc gitea --watch
+ ```
+
+1. After an external IP address is available for the `gitea` LoadBalancer service, follow the instructions in the output of the installation command to get the Gitea URL and then confirm that you can open the application in a browser.
+
+1. In another browser window, open the [Vendor Portal](https://vendor.replicated.com/) and go to **Customers**. Select the customer that you created for the Helm CLI installation.
+
+   Because the Replicated SDK was installed alongside the Gitea Helm chart, the **Reporting** page for the customer shows details about the customer's license and installed instances:
+
+ 
+
+ [View a larger version of this image](/images/tutorial-gitea-helm-reporting.png)
+
+1. On the **Reporting** page, under **Instances**, click on the instance that you just installed to open the instance details page.
+
+ On the instance details page, you can see additional insights such as the cluster where the application is installed, the version of the Replicated SDK running in the cluster, instance status and uptime, and more:
+
+ 
+
+ [View a larger version of this image](/images/tutorial-gitea-helm-instance.png)
+
+1. Uninstall the Helm chart and the Replicated SDK:
+
+ ```
+ helm uninstall gitea
+ ```
+
+1. Delete the `gitea` namespace:
+
+ ```
+ kubectl delete namespace gitea
+ ```
+
+## Next Step
+
+Congratulations! As part of this tutorial, you created a release in the Replicated Vendor Portal and installed the release with both KOTS and the Helm CLI.
+
+## Related Topics
+
+* [Installing with Helm](/vendor/install-with-helm)
+* [About the Replicated SDK](/vendor/replicated-sdk-overview)
+* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
+* [Helm Delete](https://helm.sh/docs/helm/helm_delete/)
\ No newline at end of file
diff --git a/docs/reference/tutorial-kots-helm-install-kots.md b/docs/reference/tutorial-kots-helm-install-kots.md
new file mode 100644
index 0000000000..bdfefabc5f
--- /dev/null
+++ b/docs/reference/tutorial-kots-helm-install-kots.md
@@ -0,0 +1,147 @@
+import KotsVerReq from "../partials/replicated-sdk/_kots-version-req.mdx"
+
+# Step 6: Install the Release with KOTS
+
+Next, get the KOTS installation command from the Unstable channel in the Vendor Portal and then install the release using the customer license that you downloaded.
+
+To install the release with KOTS:
+
+1. In the [Vendor Portal](https://vendor.replicated.com), go to **Channels**. From the **Unstable** channel card, under **Install**, copy the **KOTS Install** command.
+
+ 
+
+ [View a larger version of this image](/images/helm-tutorial-unstable-kots-install-command.png)
+
+1. On the command line, run the **KOTS Install** command that you copied:
+
+ ```bash
+ curl https://kots.io/install | bash
+ kubectl kots install $REPLICATED_APP/unstable
+ ```
+
+ This installs the latest version of the KOTS CLI and the Replicated KOTS Admin Console. The Admin Console provides a user interface where you can upload the customer license file and deploy the application.
+
+ For additional KOTS CLI installation options, including how to install without root access, see [Installing the KOTS CLI](/reference/kots-cli-getting-started).
+
+   :::note
+   <KotsVerReq/>
+   :::
+
+ [View a larger version of this image](/images/release-promote.png)
+
+1. In the dialog, for **Which channels would you like to promote this release to?**, select **Unstable**. Unstable is a default channel that is intended for use with internal testing.
+
+1. For **Version label**, open the dropdown and select **1.0.6**.
+
+1. Click **Promote**.
+
+
+## Next Step
+
+Create a customer so that you can install the release in a development environment. See [Create a Customer](tutorial-preflight-helm-create-customer).
+
+## Related Topics
+
+* [About Channels and Releases](/vendor/releases-about)
+* [Managing Releases with the CLI](/vendor/releases-creating-cli)
\ No newline at end of file
diff --git a/docs/reference/tutorial-preflight-helm-get-chart.mdx b/docs/reference/tutorial-preflight-helm-get-chart.mdx
new file mode 100644
index 0000000000..1532955e77
--- /dev/null
+++ b/docs/reference/tutorial-preflight-helm-get-chart.mdx
@@ -0,0 +1,115 @@
+# Step 1: Get the Sample Chart and Test
+
+To begin, get the sample Gitea Helm chart from Bitnami, install the chart in your cluster using the Helm CLI, and then uninstall. The purpose of this step is to confirm that you can successfully install the application before adding preflight checks to the chart.
+
+To get the sample Gitea Helm chart and test installation:
+
+1. Run the following command to pull and untar version 1.0.6 of the Bitnami Gitea Helm chart:
+
+ ```
+ helm pull --untar oci://registry-1.docker.io/bitnamicharts/gitea --version 1.0.6
+ ```
+ For more information about this chart, see the [bitnami/gitea](https://github.com/bitnami/charts/tree/main/bitnami/gitea) repository in GitHub.
+
+1. Change to the new `gitea` directory that was created:
+ ```
+ cd gitea
+ ```
+1. View the files in the directory:
+ ```
+ ls
+ ```
+ The directory contains the following files:
+ ```
+ Chart.lock Chart.yaml README.md charts templates values.yaml
+ ```
+1. Install the Gitea chart in your cluster:
+
+ ```
+ helm install gitea . --namespace gitea --create-namespace
+ ```
+ To view the full installation instructions from Bitnami, see [Installing the Chart](https://github.com/bitnami/charts/blob/main/bitnami/gitea/README.md#installing-the-chart) in the `bitnami/gitea` repository.
+
+ When the chart is installed, the following output is displayed:
+
+ ```
+ NAME: gitea
+ LAST DEPLOYED: Tue Oct 24 12:44:55 2023
+ NAMESPACE: gitea
+ STATUS: deployed
+ REVISION: 1
+ TEST SUITE: None
+ NOTES:
+ CHART NAME: gitea
+ CHART VERSION: 1.0.6
+ APP VERSION: 1.20.5
+
+ ** Please be patient while the chart is being deployed **
+
+ 1. Get the Gitea URL:
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ Watch the status with: 'kubectl get svc --namespace gitea -w gitea'
+
+ export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
+ echo "Gitea URL: http://$SERVICE_IP/"
+
+ WARNING: You did not specify a Root URL for Gitea. The rendered URLs in Gitea may not show correctly. In order to set a root URL use the rootURL value.
+
+ 2. Get your Gitea login credentials by running:
+
+ echo Username: bn_user
+ echo Password: $(kubectl get secret --namespace gitea gitea -o jsonpath="{.data.admin-password}" | base64 -d)
+ ```
+
+1. Watch the `gitea` LoadBalancer service until an external IP is available:
+
+ ```
+ kubectl get svc gitea --namespace gitea --watch
+ ```
+
+1. When the external IP for the `gitea` LoadBalancer service is available, run the commands provided in the output of the installation command to get the Gitea URL:
+
+ ```
+ export SERVICE_IP=$(kubectl get svc --namespace gitea gitea --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
+ echo "Gitea URL: http://$SERVICE_IP/"
+ ```
+
+ :::note
+ Alternatively, you can run the following command to forward a local port to a port on the Gitea Pod:
+
+ ```
+ POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=gitea -o jsonpath='{.items[0].metadata.name}')
+ kubectl port-forward pod/$POD_NAME 8080:3000
+ ```
+ :::
+
+1. In a browser, go to the Gitea URL to confirm that you can see the welcome page for the application:
+
+
+
+ [View a larger version of this image](/images/gitea-app.png)
+
+1. Uninstall the Helm chart:
+
+ ```
+ helm uninstall gitea --namespace gitea
+ ```
+ This command removes all the Kubernetes components associated with the chart and uninstalls the `gitea` release.
+
+1. Delete the namespace:
+
+ ```
+ kubectl delete namespace gitea
+ ```
+
+## Next Step
+
+Define preflight checks and add them to the Gitea Helm chart. See [Add a Preflight Spec to the Chart](tutorial-preflight-helm-add-spec).
+
+## Related Topics
+
+* [Helm Install](https://helm.sh/docs/helm/helm_install/)
+* [Helm Uninstall](https://helm.sh/docs/helm/helm_uninstall/)
+* [Helm Package](https://helm.sh/docs/helm/helm_package/)
+* [bitnami/gitea](https://github.com/bitnami/charts/blob/main/bitnami/gitea)
\ No newline at end of file
diff --git a/docs/reference/tutorial-preflight-helm-install-kots.mdx b/docs/reference/tutorial-preflight-helm-install-kots.mdx
new file mode 100644
index 0000000000..7844cb4aae
--- /dev/null
+++ b/docs/reference/tutorial-preflight-helm-install-kots.mdx
@@ -0,0 +1,199 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import HelmChartCr from "../partials/getting-started/_gitea-helmchart-cr.mdx"
+import KotsCr from "../partials/getting-started/_gitea-kots-app-cr.mdx"
+import K8sCr from "../partials/getting-started/_gitea-k8s-app-cr.mdx"
+import KotsVerReq from "../partials/replicated-sdk/_kots-version-req.mdx"
+
+# Step 6: Run Preflights with KOTS
+
+Create a KOTS-enabled release and then install Gitea with KOTS. The purpose of this step is to see how preflight checks automatically run in the KOTS Admin Console during installation.
+
+To run preflight checks during installation with KOTS:
+
+1. In the `gitea` directory, create a subdirectory named `manifests`:
+
+ ```
+ mkdir manifests
+ ```
+
+ You will add the files required to support installation with KOTS to this subdirectory.
+
+1. Move the Helm chart archive to `manifests`:
+
+ ```
+ mv gitea-1.0.6.tgz manifests
+ ```
+
+1. In `manifests`, create the YAML manifests required by KOTS:
+ ```
+ cd manifests
+ ```
+ ```
+ touch gitea.yaml kots-app.yaml k8s-app.yaml
+ ```
+
+1. In each of the files that you created, paste the corresponding YAML for that custom resource:
+
+   * `gitea.yaml`: The KOTS HelmChart custom resource provides instructions to KOTS about how to deploy the Helm chart. The `name` and `chartVersion` listed in the HelmChart custom resource must match the name and version of a Helm chart archive in the release. Each Helm chart archive in a release requires a unique HelmChart custom resource. A minimal sketch follows this list.
+   * `kots-app.yaml`: The KOTS Application custom resource enables features in the Replicated Admin Console such as branding, release notes, port forwarding, dashboard buttons, application status indicators, and custom graphs. The YAML provides a name for the application to display in the Admin Console, adds a custom status informer that displays the status of the `gitea` Deployment resource in the Admin Console dashboard, adds a custom application icon, and creates a port forward so that the user can open the Gitea application in a browser.
+   * `k8s-app.yaml`: The Kubernetes Application custom resource supports functionality such as including buttons and links on the Replicated Admin Console dashboard. The YAML adds an Open App button to the Admin Console dashboard that opens the application using the port forward configured in the KOTS Application custom resource.
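+
+   For reference, a minimal sketch of the HelmChart custom resource in `gitea.yaml` is shown below. This sketch assumes the `kots.io/v1beta2` schema and omits optional fields such as values mappings:
+
+   ```yaml
+   # gitea.yaml: minimal sketch of a KOTS HelmChart custom resource
+   apiVersion: kots.io/v1beta2
+   kind: HelmChart
+   metadata:
+     name: gitea
+   spec:
+     chart:
+       # Must match the name and version of the chart archive (gitea-1.0.6.tgz)
+       name: gitea
+       chartVersion: 1.0.6
+   ```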
+
+
+ [View a larger version of this image](/images/gitea-preflights-cli.png)
+
+1. Run the fourth command listed under **Option 1: Install Gitea** to install the application:
+
+ ```bash
+ helm install gitea oci://registry.replicated.com/$REPLICATED_APP/unstable/gitea
+ ```
+
+1. Uninstall and delete the namespace:
+
+ ```bash
+ helm uninstall gitea --namespace gitea
+ ```
+ ```bash
+ kubectl delete namespace gitea
+ ```
+
+## Next Step
+
+Install the application with KOTS to see how preflight checks are run from the KOTS Admin Console. See [Run Preflights with KOTS](tutorial-preflight-helm-install-kots).
+
+## Related Topics
+
+* [Running Preflight Checks](/vendor/preflight-running)
+* [Installing with Helm](/vendor/install-with-helm)
\ No newline at end of file
diff --git a/docs/reference/tutorial-preflight-helm-setup.mdx b/docs/reference/tutorial-preflight-helm-setup.mdx
new file mode 100644
index 0000000000..6dd81987d7
--- /dev/null
+++ b/docs/reference/tutorial-preflight-helm-setup.mdx
@@ -0,0 +1,44 @@
+# Introduction and Setup
+
+This topic provides a summary of the goals and outcomes for the tutorial and also lists the prerequisites to set up your environment before you begin.
+
+## Summary
+
+This tutorial introduces you to preflight checks. The purpose of preflight checks is to provide clear feedback about any missing requirements or incompatibilities in the customer's cluster _before_ they install or upgrade an application. Thorough preflight checks provide increased confidence that an installation or upgrade will succeed and help prevent support escalations.
+
+Preflight checks are part of the [Troubleshoot](https://troubleshoot.sh/) open source project, which is maintained by Replicated.
+
+In this tutorial, you use a sample Helm chart to learn how to:
+
+* Define custom preflight checks in a Kubernetes Secret in a Helm chart (a minimal sketch follows this list)
+* Package a Helm chart and add it to a release in the Replicated Vendor Portal
+* Run preflight checks using the Helm CLI
+* Run preflight checks in the Replicated KOTS Admin Console
+
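+As a preview of what you will build, the following sketch shows a custom preflight spec defined in a Kubernetes Secret. The `troubleshoot.sh/kind: preflight` label and the `troubleshoot.sh/v1beta2` Preflight kind follow the Troubleshoot conventions; the analyzer and version threshold shown are illustrative:
+
+```yaml
+# templates/troubleshoot.yaml: sketch of a preflight spec in a Secret
+apiVersion: v1
+kind: Secret
+metadata:
+  name: gitea-preflight-checks
+  labels:
+    troubleshoot.sh/kind: preflight  # label used to discover the spec
+stringData:
+  preflight.yaml: |
+    apiVersion: troubleshoot.sh/v1beta2
+    kind: Preflight
+    metadata:
+      name: preflight-sample
+    spec:
+      analyzers:
+        - clusterVersion:
+            outcomes:
+              - fail:
+                  when: "< 1.23.0"
+                  message: This application requires Kubernetes 1.23.0 or later.
+              - pass:
+                  message: The cluster meets the minimum Kubernetes version.
+```
+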
+## Set Up the Environment
+
+Before you begin, do the following to set up your environment:
+
+* Ensure that you have kubectl access to a Kubernetes cluster. You can use any cloud provider or tool that you prefer to create a cluster, such as Google Kubernetes Engine (GKE), Amazon Web Services (AWS), or minikube.
+
+ For information about installing kubectl and configuring kubectl access to a cluster, see the following in the Kubernetes documentation:
+ * [Install Tools](https://kubernetes.io/docs/tasks/tools/)
+ * [Command line tool (kubectl)](https://kubernetes.io/docs/reference/kubectl/)
+
+* Install the Helm CLI. To install the Helm CLI using Homebrew, run:
+
+ ```
+ brew install helm
+ ```
+
+ For more information, including alternative installation options, see [Install Helm](https://helm.sh/docs/intro/install/) in the Helm documentation.
+
+* Create a vendor account to access the Vendor Portal. See [Creating a Vendor Account](/vendor/vendor-portal-creating-account).
+
+ :::note
+ If you do not yet have a Vendor Portal team to join, you can sign up for a trial account. By default, trial accounts do not include access to Replicated KOTS. To get access to KOTS with your trial account so that you can complete this and other tutorials, contact Replicated at contact@replicated.com.
+ :::
+
+## Next Step
+
+Get the sample Bitnami Helm chart and test installation with the Helm CLI. See [Step 1: Get the Sample Chart and Test](/vendor/tutorial-preflight-helm-get-chart).
\ No newline at end of file
diff --git a/docs/reference/using-third-party-registry-proxy.mdx b/docs/reference/using-third-party-registry-proxy.mdx
new file mode 100644
index 0000000000..1dced888f3
--- /dev/null
+++ b/docs/reference/using-third-party-registry-proxy.mdx
@@ -0,0 +1,72 @@
+# Using a Registry Proxy for Helm Air Gap Installations
+
+This topic describes how to connect the Replicated proxy registry to a Harbor or jFrog Artifactory instance to support pull-through image caching. It also includes information about how to set up replication rules in Harbor for image mirroring.
+
+## Overview
+
+For applications distributed with Replicated, the [Replicated proxy registry](/vendor/private-images-about) grants proxy, or _pull-through_, access to application images without exposing registry credentials to customers.
+
+Users can optionally connect the Replicated proxy registry with their own [Harbor](https://goharbor.io) or [jFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation) instance to proxy and cache the images that are required for installation on demand. This can be particularly helpful in Helm installations in air-gapped environments because it allows users to pull and cache images from an internet-connected machine, then access the cached images during installation from a machine with limited or no outbound internet access.
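+
+For example, a customer can authenticate to the proxy registry with their license and pull an application image through it. The following is a sketch with placeholder values — the path after `/proxy/APP_SLUG/` mirrors the image's location in your external registry:
+
+```bash
+# Sketch: authenticate with the customer email and license ID, then pull through the proxy
+docker login proxy.replicated.com --username EMAIL --password LICENSE_ID
+docker pull proxy.replicated.com/proxy/APP_SLUG/quay.io/my-org/nginx:v1.0.1
+```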
+
+In addition to the support for on-demand pull-through caching, connecting the Replicated proxy registry to a Harbor or Artifactory instance also has the following benefits:
+* Registries like Harbor or Artifactory typically support access controls as well as scanning images for security vulnerabilities
+* With Harbor, users can optionally set up replication rules for image mirroring, which can be used to improve data availability and reliability
+
+## Limitation
+
+Artifactory does not support mirroring or replication for Docker registries. If you need to set up image mirroring, use Harbor. See [Set Up Mirroring in Harbor](#harbor-mirror) below.
+
+## Connect the Replicated Proxy Registry to Harbor
+
+[Harbor](https://goharbor.io) is a popular open-source container registry. Users can connect the Replicated proxy registry to Harbor in order to cache images on demand and set up pull-based replication rules to proactively mirror images. Connecting the Replicated proxy registry to Harbor also allows customers to use Harbor's security features.
+
+### Use Harbor for Pull-Through Proxy Caching {#harbor-proxy-cache}
+
+To connect the Replicated proxy registry to Harbor for pull-through proxy caching:
+
+1. Log in to Harbor and create a new replication endpoint. This endpoint connects the Replicated proxy registry to the Harbor instance. For more information, see [Creating Replication Endpoints](https://goharbor.io/docs/2.11.0/administration/configuring-replication/create-replication-endpoints/) in the Harbor documentation.
+
+1. Enter the following details for the endpoint:
+
+ * For the provider field, choose Docker Registry.
+ * For the URL field, enter `https://proxy.replicated.com` or the custom domain that is configured for the Replicated proxy registry. For more information about configuring custom domains in the Vendor Portal, see [Using Custom Domains](/vendor/custom-domains-using).
+ * For the access ID, enter the email address associated with the customer in the Vendor Portal.
+ * For the access secret, enter the customer's unique license ID. You can find the license ID in the Vendor Portal by going to **Customers > [Customer Name]**.
+
+1. Verify your configuration by testing the connection and then save the endpoint.
+
+1. After adding the Replicated proxy registry as a replication endpoint in Harbor, set up a proxy cache. This allows for pull-through image caching with Harbor. For more information, see [Configure Proxy Cache](https://goharbor.io/docs/2.11.0/administration/configure-proxy-cache/) in the Harbor documentation.
+
+1. (Optional) Add a pull-based replication rule to support image mirroring. See [Configure Image Mirroring in Harbor](#harbor-mirror) below.
+
+### Configure Image Mirroring in Harbor {#harbor-mirror}
+
+To enable image mirroring with Harbor, users create a pull-based replication rule. This periodically (or when manually triggered) pulls images from the Replicated proxy registry to store them in Harbor.
+
+The Replicated proxy registry exposes standard catalog and tag listing endpoints that are used by Harbor to support image mirroring:
+* The catalog endpoint returns a list of repositories built from images of the last 10 releases.
+* The tags listing endpoint lists the tags available in a given repository for those same releases.
+
+When image mirroring is enabled, Harbor uses these endpoints to build a list of images to cache and then serve.
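+
+If these endpoints follow the standard Docker Registry HTTP API v2 paths, they can also be queried directly. The following sketch assumes those paths and uses placeholder credentials — an email address and license ID associated with a customer:
+
+```bash
+# Sketch: list repositories, then list tags for one repository
+curl -u EMAIL:LICENSE_ID https://proxy.replicated.com/v2/_catalog
+curl -u EMAIL:LICENSE_ID https://proxy.replicated.com/v2/REPOSITORY/tags/list
+```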
+
+#### Limitations
+
+Image mirroring with Harbor has the following limitations:
+
+* Neither the catalog nor the tags listing endpoint exposed by the Replicated proxy service respects pagination requests. Note that Harbor requests 1000 items at a time.
+
+* Only authenticated users can perform catalog calls or list tags. Authenticated users are those with an email address and license ID associated with a customer in the Vendor Portal.
+
+#### Create a Pull-Based Replication Rule in Harbor for Image Mirroring
+
+To configure image mirroring in Harbor:
+
+1. Follow the steps in [Use Harbor for Pull-Through Proxy Caching](#harbor-proxy-cache) above to add the Replicated proxy registry to Harbor as a replication endpoint.
+
+1. Create a **pull-based** replication rule in Harbor to mirror images proactively. For more information, see [Creating a replication rule](https://goharbor.io/docs/2.11.0/administration/configuring-replication/create-replication-rules/) in the Harbor documentation.
+
+## Use Artifactory for Pull-Through Proxy Caching
+
+[jFrog Artifactory](https://jfrog.com/help/r/jfrog-artifactory-documentation) supports pull-through caching for Docker registries.
+
+For information about how to configure a pull-through cache with Artifactory, see [Remote Repository](https://jfrog.com/help/r/jfrog-artifactory-documentation/configure-a-remote-repository) in the Artifactory documentation.
diff --git a/docs/reference/vendor-api-using.md b/docs/reference/vendor-api-using.md
deleted file mode 100644
index bf5b087ea4..0000000000
--- a/docs/reference/vendor-api-using.md
+++ /dev/null
@@ -1,32 +0,0 @@
-import ApiAbout from "../partials/vendor-api/_api-about.mdx"
-
-# Using the Vendor API v3
-
-This topic describes how to use Replicated Vendor API authentication tokens to make API calls.
-
-## About the Vendor API
-
-
+
+[View a larger version of this image](/images/application-settings.png)
+
+The following describes each of the application settings:
+
+- **Application name:** The application name is initially set when you first create the application in the Vendor Portal. You can change the name at any time so that it displays as a user-friendly name that your team can easily identify.
+- **Application slug:** The application slug is used with the Replicated CLI and with some of the KOTS CLI commands. You can click on the link below the slug to toggle between the application ID number and the slug name. The application ID and application slug are unique identifiers that cannot be edited.
+- **Service Account Tokens:** Provides a link to the **Service Accounts** page, where you can create or remove a service account. Service accounts are paired with API tokens and are used with the Vendor API to automate tasks. For more information, see [Using Vendor API Tokens](/reference/vendor-api-using).
+- **Scheduler:** Displayed if the application has a KOTS entitlement.
+- **Danger Zone:** Lets you delete the application, and all of the licenses and data associated with the application. The delete action cannot be undone.
\ No newline at end of file
diff --git a/docs/reference/vendor-portal-creating-account.md b/docs/reference/vendor-portal-creating-account.md
new file mode 100644
index 0000000000..503de4dd8e
--- /dev/null
+++ b/docs/reference/vendor-portal-creating-account.md
@@ -0,0 +1,46 @@
+# Creating a Vendor Account
+
+To get started with Replicated, you must create a Replicated vendor account. When you create your account, you are also prompted to create an application. To create additional applications in the future, log in to the Replicated Vendor Portal and select **Create new app** from the Applications drop-down list.
+
+To create a vendor account:
+
+1. Go to the [Vendor Portal](https://vendor.replicated.com), and select **Sign up**.
+
+ The sign up page opens.
+2. Enter your email address or continue with Google authentication.
+
+   - If you register with an email address, the Activate account page opens and an activation code is sent to your email.
+
+ :::note
+ To resend the code, click **Resend it**.
+ :::
+
+ - Copy and paste the activation code into the text box and click **Activate**. Your account is now activated.
+
+ :::note
+ After your account is activated, you might have the option to accept a pending invitation, or to automatically join an existing team if the auto-join feature is enabled by your administrator. For more information about enabling the auto-join feature, see [Enable Users to Auto-join Your Team](https://docs.replicated.com/vendor/team-management#enable-users-to-auto-join-your-team).
+ :::
+
+3. On the Create your team page, enter your first name, last name, and company name. Click **Continue** to complete the setup.
+
+ :::note
+ The company name you provide is used as your team name in Vendor Portal.
+ :::
+
+ The Create application page opens.
+
+4. Enter a name for the application, such as `My-Application-Demo`. Click **Create application**.
+
+ The application is created and the Channels page opens.
+
+ :::important
+ Replicated recommends that you use a temporary name for the application at this time such as `My-Application-Demo` or `My-Application-Test`.
+
+ Only use an official name for your application when you have completed testing and are ready to distribute the application to your customers.
+
+ Replicated recommends that you use a temporary application name for testing because you are not able to restore or modify previously-used application names or application slugs in the Vendor Portal.
+ :::
+
+## Next Step
+
+Invite team members to collaborate with you in Vendor Portal. See [Invite Members](team-management#invite-members).
diff --git a/docs/reference/vendor-portal-manage-app.md b/docs/reference/vendor-portal-manage-app.md
new file mode 100644
index 0000000000..b5e87243d7
--- /dev/null
+++ b/docs/reference/vendor-portal-manage-app.md
@@ -0,0 +1,145 @@
+# Managing Applications
+
+This topic provides information about managing applications, including how to create, delete, and retrieve the slug for applications in the Replicated Vendor Portal and with the Replicated CLI.
+
+For information about creating and managing applications with the Vendor API v3, see the [apps](https://replicated-vendor-api.readme.io/reference/createapp) section in the Vendor API v3 documentation.
+
+## Create an Application
+
+Teams can create one or more applications. It is common to create multiple applications for testing purposes.
+
+### Vendor Portal
+
+To create a new application:
+
+1. Log in to the [Vendor Portal](https://vendor.replicated.com/). If you do not have an account, see [Creating a Vendor Account](/vendor/vendor-portal-creating-account).
+
+1. In the top left of the page, open the application drop-down and click **Create new app...**.
+
+
+
+ [View a larger version of this image](/images/create-new-app.png)
+
+1. On the **Create application** page, enter a name for the application.
+
+
+
+ [View a larger version of this image](/images/create-application-page.png)
+
+ :::important
+ If you intend to use the application for testing purposes, Replicated recommends that you use a temporary name such as `My Application Demo` or `My Application Test`.
+
+ You are not able to restore or modify previously-used application names or application slugs.
+ :::
+
+1. Click **Create application**.
+
+### Replicated CLI
+
+To create an application with the Replicated CLI:
+
+1. Install the Replicated CLI. See [Installing the Replicated CLI](/reference/replicated-cli-installing).
+
+1. Run the following command:
+
+ ```bash
+ replicated app create APP-NAME
+ ```
+ Replace `APP-NAME` with the name that you want to use for the new application.
+
+ **Example**:
+
+ ```bash
+ replicated app create cli-app
+ ID NAME SLUG SCHEDULER
+ 1xy9t8G9CO0PRGzTwSwWFkMUjZO cli-app cli-app kots
+ ```
+
+## Get the Application Slug {#slug}
+
+Each application has a slug, which is used for interacting with the application using the Replicated CLI. The slug is automatically generated based on the application name and cannot be changed.
+
+### Vendor Portal
+
+To get an application slug in the Vendor Portal:
+
+1. Log in to the [Vendor Portal](https://vendor.replicated.com/) and go to **_Application Name_ > Settings**.
+
+1. Under **Application Slug**, copy the slug.
+
+
+
+ [View a larger version of this image](/images/application-settings.png)
+
+### Replicated CLI
+
+To get an application slug with the Replicated CLI:
+
+1. Install the Replicated CLI. See [Installing the Replicated CLI](/reference/replicated-cli-installing).
+
+1. Run the following command:
+
+ ```bash
+ replicated app ls APP-NAME
+ ```
+ Replace `APP-NAME` with the name of the target application. Or, exclude `APP-NAME` to list all applications in the team.
+
+ **Example:**
+
+ ```bash
+ replicated app ls cli-app
+ ID NAME SLUG SCHEDULER
+ 1xy9t8G9CO0PRGzTwSwWFkMUjZO cli-app cli-app kots
+ ```
+
+1. Copy the value in the `SLUG` field.
+
+## Delete an Application
+
+When you delete an application, you also delete all licenses and data associated with the application. You can also optionally delete all images associated with the application from the Replicated registry. Deleting an application cannot be undone.
+
+### Vendor Portal
+
+To delete an application in the Vendor Portal:
+
+1. Log in to the [Vendor Portal](https://vendor.replicated.com/) and go to **_Application Name_ > Settings**.
+
+1. Under **Danger Zone**, click **Delete App**.
+
+
+
+ [View a larger version of this image](/images/application-settings.png)
+
+1. In the **Are you sure you want to delete this app?** dialog, enter the application name. Optionally, enter your password if you want to delete all images associated with the application from the Replicated registry.
+
+
+
+ [View a larger version of this image](/images/delete-app-dialog.png)
+
+1. Click **Delete app**.
+
+### Replicated CLI
+
+To delete an application with the Replicated CLI:
+
+1. Install the Replicated CLI. See [Installing the Replicated CLI](/reference/replicated-cli-installing).
+
+1. Run the following command:
+
+ ```bash
+ replicated app delete APP-NAME
+ ```
+ Replace `APP-NAME` with the name of the target application.
+
+1. When prompted, type `yes` to confirm that you want to delete the application.
+
+ **Example:**
+
+ ```bash
+ replicated app delete deletion-example
+ • Fetching App ✓
+ ID NAME SLUG SCHEDULER
+ 1xyAIzrmbvq... deletion-example deletion-example kots
+   Delete the above listed application? There is no undo: yes
+ • Deleting App ✓
+ ```
\ No newline at end of file
diff --git a/docs/vendor/testing-supported-clusters.md b/docs/vendor/testing-supported-clusters.md
index 8f97a7f649..e7c319eabd 100644
--- a/docs/vendor/testing-supported-clusters.md
+++ b/docs/vendor/testing-supported-clusters.md
@@ -21,7 +21,7 @@ Compatibility Matrix supports creating [kind](https://kind.sigs.k8s.io/) cluster