The following example shows an **Open App** link on the dashboard of the Admin Console:
[View a larger version of this image](/images/gitea-open-app.png)
## Example: NGINX Application with ClusterIP and NodePort Services
The following example demonstrates how to link to a port-forwarded ClusterIP service for existing cluster KOTS installations. It also shows how to use the `ports` key to add a link to a NodePort service for Embedded Cluster or kURL installations. Although the primary purpose of the `ports` key is to port forward services for existing cluster KOTS installations, it can also be used so that links to NodePort services in Embedded Cluster or kURL installations use the hostname in the browser. For information about exposing NodePort services for Embedded Cluster or kURL installations, see [Exposing Services Using NodePorts](kurl-nodeport-services).
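For reference, a minimal sketch of the `ports` entry in a KOTS Application custom resource is shown below. The service name, ports, and URL follow the NGINX example in this section; the application name, title, and status informer are placeholders.

```yaml
# kots-app.yaml (minimal sketch)
# Port forwards the ClusterIP nginx service on local port 8888 in existing
# cluster installations, and lets the dashboard link for the NodePort service
# in Embedded Cluster or kURL installations use the hostname in the browser.
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: nginx           # placeholder application name
spec:
  title: NGINX Example  # placeholder title
  statusInformers:
    - deployment/nginx  # assumes the Deployment in example-deployment.yaml is named nginx
  ports:
    - serviceName: nginx       # must match the Service name
      servicePort: 80          # port exposed by the Service
      localPort: 8888          # local port used for the port forward and in rewritten links
      applicationUrl: "http://nginx"
```

The `applicationUrl` must match the link URL defined in the `k8s-app.yaml` Kubernetes SIG Application custom resource so that KOTS adds the **Open App** link to the Admin Console dashboard and rewrites it to use the hostname in the browser.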
To test this example:
1. Add the `example-service.yaml`, `example-deployment.yaml`, `kots-app.yaml`, `k8s-app.yaml`, and `embedded-cluster.yaml` files provided below to a new, empty release in the Vendor Portal. Promote to the channel that you use for internal testing. For more information, see [Manage Releases with the Vendor Portal](releases-creating-releases).
<p>The YAML below contains ClusterIP and NodePort specifications for a service named <code>nginx</code>. Each specification uses the <code>kots.io/when</code> annotation with the Replicated <a href="/reference/template-functions-static-context#distribution">Distribution</a> template function to conditionally include the service based on the installation type (existing cluster or Embedded Cluster/kURL cluster). For more information, see <a href="/vendor/packaging-include-resources">Conditionally Including or Excluding Resources</a>.</p>
<p>As shown below, both the ClusterIP and NodePort <code>nginx</code> services are exposed on port 80.</p>
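The following is a minimal sketch of what this service definition can look like. The exact strings returned by the Distribution template function and the selector labels are assumptions; verify them against the Distribution template function reference and the labels in `example-deployment.yaml`.

```yaml
# example-service.yaml (minimal sketch)
# Both Service objects share the name nginx; only one is included per
# installation because of the kots.io/when annotations, so they do not conflict.

# ClusterIP service, included only in existing cluster installations.
# Assumes Distribution returns "embedded-cluster" and "kurl" for those distributions.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kots.io/when: 'repl{{ and (ne Distribution "embedded-cluster") (ne Distribution "kurl") }}'
spec:
  type: ClusterIP
  selector:
    app: nginx   # assumed to match the Pod labels in example-deployment.yaml
  ports:
    - port: 80
      targetPort: 80
---
# NodePort service, included only in Embedded Cluster or kURL installations.
# The nodePort of 8888 is taken from the DNS record note later in this example
# and assumes the cluster's NodePort range allows it.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kots.io/when: 'repl{{ or (eq Distribution "embedded-cluster") (eq Distribution "kurl") }}'
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 8888
```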
<p>To install your application with Embedded Cluster, an Embedded Cluster Config must be present in the release. At minimum, the Embedded Cluster Config sets the version of Embedded Cluster that will be installed. You can also define several characteristics about the cluster.</p>
<h5>YAML</h5>
<EcCr/>
</TabItem>
</Tabs>
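The `k8s-app.yaml` file listed in this step defines the link that appears as **Open App** on the Admin Console dashboard. A minimal sketch is shown below; the metadata name is a placeholder, and the link URL must match the `applicationUrl` in the KOTS Application custom resource (`http://nginx` in this example).

```yaml
# k8s-app.yaml (minimal sketch)
# The link URL matches ports.applicationUrl in kots-app.yaml so that KOTS
# rewrites it to use the hostname in the browser and the configured port.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: nginx   # placeholder name
spec:
  descriptor:
    links:
      - description: Open App   # label shown on the Admin Console dashboard
        url: "http://nginx"
```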
1. Install the release into an existing cluster and confirm that the service was port-forwarded successfully by clicking **Open App** on the Admin Console dashboard. For more information, see [Online Installation in Existing Clusters with KOTS](/enterprise/installing-existing-cluster).
1. If there is not already a kURL installer promoted to the channel, add a kURL installer to the release to support kURL installs. For more information, see [Create a kURL Installer](/vendor/packaging-embedded-kubernetes).
1. Install the release on a VM and confirm that you can open the application by clicking **Open App** on the Admin Console dashboard. For more information, see [Online Installation with Embedded Cluster](/enterprise/installing-embedded) or [Online Installation with kURL](/enterprise/installing-kurl).
:::note
Ensure that the VM where you install allows HTTP traffic.
:::
:::note
If you used Replicated Compatibility Matrix to create the VM, follow the steps in [Expose Ports on Running VMs](/vendor/testing-vm-create#expose-ports-on-running-vms) to add these DNS records to the VM:
* A DNS record with a **Target Port** of **30000** to get a hostname where you can access the Admin Console
* A DNS record with a **Target Port** of **8888** to get a hostname where you can access the NGINX application
:::