diff --git a/CHANGELOG.md b/CHANGELOG.md index 595dc1db..9b69358c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,9 @@ # SAS Viya Monitoring for Kubernetes +## Unreleased +* **Overall** + * [REMOVAL] Removed the previously deprecated TLS sample. Deploying with TLS enabled has been the default since version +1.2.15 (18JUL23). + ## Version 1.2.39 (20JUN2025) * **Metrics** diff --git a/logging/Azure_log_analytics_workspaces.md b/logging/Azure_log_analytics_workspaces.md index 6bbfca07..2b9b67b4 100644 --- a/logging/Azure_log_analytics_workspaces.md +++ b/logging/Azure_log_analytics_workspaces.md @@ -2,34 +2,32 @@ ## Introduction -If you use Microsoft Azure and want to use Azure Monitor *instead* of OpenSearch -to explore, filter, and report on log messages from a SAS Viya environment, you -can deploy the logging solution described in this document. This solution uses a -combination of Fluent Bit and an Azure Log Analytics workspace to handle log -messages so that they can be accessed by Azure Monitor. Please note that -the development work in this solution has been limited, and is primarily a -proof of concept. - -**Note: Azure Monitor and Azure Log Analytics are optional features of -Microsoft Azure, and they require agreement to additional licensing terms and -additional charges. You must understand the terms and charges before you +If you use Microsoft Azure and want to use Azure Monitor *instead* of OpenSearch +to explore, filter, and report on log messages from a SAS Viya environment, you +can deploy the alternate logging solution described in this document. This solution +uses a combination of Fluent Bit and an Azure Log Analytics workspace to handle log +messages so that they can be accessed by Azure Monitor. Please note that the +development work in this solution has been limited, and is primarily a proof of +concept. + +**Note: Azure Monitor and Azure Log Analytics are optional features of +Microsoft Azure, and they require agreement to additional licensing terms and +additional charges. You must understand the terms and charges before you deploy the solution described in this document.** ## Technical Overview -In this solution, log messages are collected by the Fluent Bit -pods that are part of the standard logging solution and that are deployed -cluster-wide by using a Kubernetes DaemonSet. These Fluent Bit pods can -parse and process log messages from all SAS Viya components, including -third-party products. As a result, log messages are handled consistently, -regardless of the original source. +In this alternate solution, log messages are collected by the Fluent Bit +pods that are part of the standard logging solution and that are deployed +cluster-wide by using a Kubernetes DaemonSet. These Fluent Bit pods can +parse and process log messages from all SAS Viya components, including +third-party products. As a result, log messages are handled consistently, +regardless of the original source. -In the standard solution, Fluent Bit sends the log messages to OpenSearch. -In this solution, the log messages are loaded into a Log Analytics workspace -as a "custom log" source. You can then use Azure Monitor to explore, filter, and -report on the collected log messages. By default, this solution also includes -the Event Router, with is a component that surfaces Kubernetes events as -pseudo-log messages beside the log messages collected by Fluent Bit. +In the standard solution, Fluent Bit sends the log messages to OpenSearch. 
+In this solution, the log messages are loaded into a Log Analytics workspace +as a "custom log" source. You can then use Azure Monitor to explore, filter, and +report on the collected log messages. ## Deploy the Fluent Bit and Azure Log Analytics Solution @@ -74,37 +72,18 @@ This command returns two shared keys, labeled ***"primarySharedKey"*** and ***"s /logging/bin/deploy_logging_azmonitor.sh ``` -You can also deploy individual components: -- Event Router -```bash -/logging/bin/deploy_eventroutersh -``` -- Fluent Bit -```bash -/logging/bin/deploy_fluentbit_azmonitor.sh -``` - ## Remove the Fluent Bit and Azure Log Analytics Solution To remove all logging components for this solution, issue this command: ```bash /logging/bin/remove_logging_azmonitor.sh ``` -By default, this script does not delete the namespace, but it does delete configmaps and secrets that were created by the deployment script. - -To remove individual components, issue these commands: -- Event Router -```bash -/logging/bin/remove_eventroutersh -``` -- Fluent Bit -```bash -/logging/bin/remove_fluentbit_azmonitor.sh -``` +By default, this script does not delete the namespace, but it does delete configmaps and secrets that were created by the deployment script. If you would like to delete the namespace as well, set the environment variable `LOG_DELETE_NAMESPACE_ON_REMOVE` to +*'true'* prior to running the script. ## Using Connection Information From a Kubernetes Secret -The deployment script creates a Kubernetes secret named `connection-info-azmonitor`containing the connection information. +The deployment script creates a Kubernetes secret named `connection-info-azmonitor`containing the connection information. This ensures that the connection information is available in case the Fluent Bit pods are restarted or new nodes are added to the cluster. This secret is created in the same namespace into which the Fluent Bit pods are deployed. If this secret already exists @@ -115,11 +94,11 @@ when you run the deployment script, the script obtains the connection informatio After deploying this solution, the collected log messages appear as a new table, **viya_logs_CL**, in the ***Custom Logs*** grouping within the specified Log Analytics workspace. The structure of this table is similar -to the structure of the log messages that are surfaced in OpenSearch Dashboards when using the +to the structure of the log messages that are surfaced in OpenSearch Dashboards when using the standard logging solution. However, due to features of the Azure API, the names of some fields are slightly different. The tables feature a flattened data model, so multi-level JSON fields appear as multiple -fields with the JSON hierarchy embedded in each field's name. In addition, a +fields with the JSON hierarchy embedded in each field's name. In addition, a suffix is added to the name of most of the fields to indicate the field's data type, such as ***_s*** for string fields and ***_d*** for a numeric (double) field. @@ -131,15 +110,15 @@ appear in the **viya_logs_CL** table as **kube_namespace_s** and **kube_pod_s**. ## Using the Data -Although a full explanation of how you can use the collected log messages in -Azure Monitor and the Log Analytics workspace is out of scope for this document, +Although a full explanation of how you can use the collected log messages in +Azure Monitor and the Log Analytics workspace is out of scope for this document, here are some tips to help you get started. 
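+
+For example, you can run a quick query against the workspace from the command
+line by using the Azure CLI. The following is a minimal sketch; it assumes the
+Azure CLI `log-analytics` extension is available and that you substitute your
+own workspace (customer) ID for the placeholder value.
+
+```bash
+# One-time setup: add the Log Analytics query extension if it is not already present
+az extension add --name log-analytics
+
+# Return the ten most recent SAS Viya log messages from the custom table
+az monitor log-analytics query \
+  --workspace "<workspace-customer-id>" \
+  --analytics-query "viya_logs_CL | sort by TimeGenerated desc | take 10" \
+  --output table
+```
+
+The interactive query window in the Azure Portal, described below, provides a
+richer experience for exploring the collected data.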
### Kusto Queries Kusto is a powerful query language used by Log Analytics workspaces -and Azure Monitor. To access an interactive Kusto query window in Azure Monitor, -select your Log Analytics workspace in Azure Monitor, then select **Logs** from -the **General** area of the toolbar on the left side of the window. +and Azure Monitor. To access an interactive Kusto query window in Azure Monitor, +select your Log Analytics workspace in Azure Monitor, then select **Logs** from +the **General** area of the toolbar on the left side of the window. ![Azure Log Analytics Workspace - Kusto Query](../img/screenshot-kustoquery-chart.png) @@ -147,7 +126,7 @@ The query window enables you perform these actions: - Enter Kusto queries. - Display query results as charts or graphs. - Export the query results. - - Add query results to an Azure dashboard. + - Add query results to an Azure dashboard. You can also use Kusto queries as part of Azure Monitor workbooks. @@ -159,15 +138,15 @@ Here are some sample Kusto queries. This Kusto query returns all of the log messages collected in the past five minutes. ``` -viya_logs_CL +viya_logs_CL | where TimeGenerated > ago(5m) ``` If a query returns a large number of results, only the first 10,000 results are shown. A message is displayed if the number of results is limited. ### Sample Query #2: Display Selected Fields -This Kusto query also returns the log messages collected in the past five minutes, but it -returns only specific fields. Limiting the information returned might make it easier +This Kusto query also returns the log messages collected in the past five minutes, but it +returns only specific fields. Limiting the information returned might make it easier to interpret the results. ``` viya_logs_CL @@ -177,8 +156,8 @@ viya_logs_CL ### Sample Query #3: Display Message Counts by Message Severity and Source The following query also returns the number of log messages generated over the last five minutes, but also summarizes the messages by message severity (**Level**) and source (**logsource_s**). -The query returns the results in the form of a table. To view the results as -a chart, click **Chart** item in the menu above the results output. +The query returns the results in the form of a table. To view the results as +a chart, click **Chart** item in the menu above the results output. ``` viya_logs_CL | where TimeGenerated > ago(5m) diff --git a/logging/Differences_between_ODFE_and_OpenSearch.md b/logging/Differences_between_ODFE_and_OpenSearch.md deleted file mode 100644 index 8cddb922..00000000 --- a/logging/Differences_between_ODFE_and_OpenSearch.md +++ /dev/null @@ -1,101 +0,0 @@ -# Differences between Open Distro for Elasticsearch and OpenSearch - -## Overview - -In release 1.2.0 (June 2022), the SAS Viya Monitoring for Kubernetes solution makes the following changes: - -* OpenSearch is deployed instead of the Open Distro for Elasticsearch (ODFE) distribution of Elasticsearch. -* OpenSearch Dashboards is deployed instead of Kibana. - -This topic describes the changes between the deployments. It assumes you have an understanding of the deployment process for the current log-monitoring solution including the customization process. - -**Note:** Other than the differences described in this topic, you should not see any significant differences when using the log-monitoring solution configured with OpenSearch. - -## Important Considerations - -When preparing to migrate, be sure to remember the following considerations: - -* The migration is nonreversible. 
You cannot reverse an OpenSearch-based deployment to a deployment that uses Open Distro for Elasticsearch without losing collected log messages. -* You cannot run an Open Distro for Elasticsearch-based deployment and an OpenSearch-based deployment in the same cluster, even if they are deployed to different Kubernetes namespaces. - -## Differences - -### Products - -The unstructured data store and search back-end component, formerly Elasticsearch, is now OpenSearch. - -The visualization and reporting application, formerly Kibana, is now OpenSearch Dashboards. - -### Helm Charts - -Although OpenSearch is virtually identical to Open Distro for Elasticsearch, the Helm chart used to deploy OpenSearch is different. This required the following changes in the customization process: - -* The names of the YAML files used to customize the deployment have changed. -* The structure of these YAML files has changed. - -See [Compiled Differences](#compiled_dif_table) for more information. - -### Deployment Topology - -The topology of the search back-end is different. - -* The Open Distro for Elasticsearch search back-end is configured by default to use eight Kubernetes pods composed of two Elasticsearch client nodes, three Elasticsearch master nodes, and three Elasticsearch data nodes. -* For the OpenSearch search back-end, the default configuration is three Kubernetes pods composed of three OpenSearch "multi-role" nodes. -* The Java memory settings have increased from 1GB to 4GB for each OpenSearch node. - -### NodePort Accessibility - -If you had not configured access via Ingress, the Open Distro for Elasticsearch-based deployment script automatically made Kibana accessible via NodePort. - -The OpenSearch deployment script does not do this automatically. If you want to make OpenSearch Dashboards accessible via NodePort, you must set the environment variable `KB_KNOWN_NODEPORT_ENABLE` to `true` before running the `logging/bin/deploy_logging.sh` script. - -When this option is set, OpenSearch Dashboards is accessible on the same port (31033) that was used by Kibana. - -**Note:** NodePorts are not suitable for production deployments. - -**Note:** As of release 1.2.1, if you do not set the environment variable `KB_KNOWN_NODEPORT_ENABLE` to `true` before running the `logging/bin/deploy_logging.sh` script, you can now use the `configure_nodeport.sh` -script after deployment. -See [Configure Access to OpenSearch](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n0l4k3bz39cw2dn131zcbat7m4r1.htm). - -### Migrate an Existing Deployment - -To migrate an existing deployment of the log monitoring project, run the `./logging/bin/deploy_logging.sh` script. -If an existing deployment is detected in the namespace, that deployment -is migrated to the OpenSearch-based deployment. - -### Remove OpenSearch-based Log Monitoring - -To remove log monitoring that uses OpenSearch instead of ODFE, run the `logging/bin/remove_logging.sh` script. The new script supports all the same environment variables as the prior script. - -### Running Other Scripts - -Other than the scripts described in the [table](#compiled_dif_table), the remaining log monitoring scripts work with either search back-end. - -**Note:** When running the administrative scripts with an existing ODFE-based deployment, you must set the `LOG_SEARCH_BACKEND` environment variable to `ODFE` prior to running the script. By default, these scripts assume the search back-end is OpenSearch. 
-
-See [Compiled Differences](#compiled_dif_table) for the list of changed script file names.
-
-### Compiled Differences
-
-Open Distro for Elasticsearch (Release 1.1.8 and earlier) | Experimental OpenSearch (Release 1.1.8) | Production OpenSearch (Release 1.2.0 and later)
-----|----|----
-**Script File Name Changes between Releases**
-logging/bin/deploy_logging_open.sh | logging/bin/deploy_logging_opensearch.sh | logging/bin/deploy_logging.sh
-logging/bin/deploy_logging_open_openshift.sh | logging/bin/deploy_logging_opensearch_openshift.sh | logging/bin/deploy_logging_openshift.sh
-logging/bin/remove_logging_open.sh | logging/bin/remove_logging_opensearch.sh | logging/bin/remove_logging.sh
-logging/bin/remove_logging_open_openshift.sh | logging/bin/remove_logging_opensearch_openshift.sh | logging/bin/remove_logging_openshift.sh
-**YAML File Name Changes between Releases** **Note:** The OpenSearch YAML files required format changes from the Elasticsearch YAML file. See the Ingress sample and the TLS sample for more details. | |
-$USER_DIR/logging/user-values-elasticsearch-open.yaml (Elasticsearch configuration changes) | $USER_DIR/user-values-elasticsearch-opensearch.yaml | $USER_DIR/logging/user-values-opensearch.yaml
-$USER_DIR/logging/user-values-elasticsearch-open.yaml (Kibana configuration changes) | $USER_DIR/user-values-osd-opensearch.yaml | $USER_DIR/logging/user-values-osd.yaml
-**Kubernetes Resource Name Changes between Releases** ||
-**Kubernetes Pods** ||
-v4m-es-data-0, v4m-es-data-1, v4m-es-data-2; v4m-es-master-0, v4m-es-master-1, v4m-es-master-2; v4m-es-client--xxxxxxxxxx-xxxxx, v4m-es-client-xxxxxxxxxx-xxxxx | v4m-es-0, v4m-es-1, v4m-es-2 | v4m-search-0, v4m-search-1, v4m-search-2
-v4m-es-kibana-xxxxxxxx-xxxxx | v4m-osd-xxxxxxxxxx-xxxxx | v4m-osd-xxxxxxxxxx-xxxxx
-**Kubernetes Services** ||
-v4m-es-client-service | v4m-es; v4m-es-headless v4m-search; | v4m-search-headless
-v4m-es-data-svc | Eliminated | Eliminated
-v4m-es-discovery | Eliminated | Eliminated
-v4m-es-kibana-svc | v4m-osd | v4m-osd
-v4m-es-client-service | v4-es | v4m-search
-**Kubernetes Ingress** ||
-v4m-es-kibana-ing | v4m-osd | v4m-osd
\ No newline at end of file
diff --git a/logging/README.md b/logging/README.md
index 9f3f9e77..1fe781ff 100644
--- a/logging/README.md
+++ b/logging/README.md
@@ -4,10 +4,10 @@ To learn how to deploy the logging component, see [Getting Started](https://docu
 
 ## Important Information about OpenSearch and OpenSearch Dashboards
 
->As of release 1.2.0, this project uses OpenSearch and OpenSearch Dashboards.
-
-**Notes:**
-
-* OpenSearch replaces Elasticsearch.
-* OpenSearch Dashboards replaces Kibana.
-* Some configuration options, environment variables, and other aspects of this project might still include references to the prior product names. This is intentional. Doing so supports backward compatibility and continuity for users of this project. These references might change at a later date.
+This project has used OpenSearch and OpenSearch Dashboards since
+version 1.2.0 (released in June 2022). Prior to that, the project used
+Elasticsearch and Kibana. To support backward compatibility, some
+configuration options, environment variables, and other aspects of
+this project still include references to those product names. References
+to Elasticsearch and Kibana should be understood to refer to OpenSearch
+and OpenSearch Dashboards, respectively.
\ No newline at end of file
diff --git a/samples/README.md b/samples/README.md
index 983b29bd..e9faa24b 100644
--- a/samples/README.md
+++ b/samples/README.md
@@ -18,7 +18,7 @@ each deployment file.
 
 You customize your logging deployment by specifying values in `user.env` and
 `*.yaml` files. These files are stored in a local directory outside of your
-repository that is identified by the `USER_DIR` environment variable. 
+repository that is identified by the `USER_DIR` environment variable.
 For information about the customization process, see [Create the Deployment Directory](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p15fe8611w9njkn1fucwbvlz8tyg.htm) in the SAS Viya Monitoring for Kubernetes Help Center.
 
 The customization files in each sample provide a starting point for the
@@ -61,8 +61,7 @@ from SAS Viya components.
 * [ingress](ingress) - Deploys using host-based or path-based ingress.
 * [namespace-monitoring](namespace-monitoring) - Separates cluster monitoring
   from SAS Viya monitoring.
-* [tls](tls) - Enables TLS encryption for both in-cluster and ingress. Options
-  for either host-based and path-based ingress are included.
+
 
 ## Other Samples
 
diff --git a/samples/generic-base/README.md b/samples/generic-base/README.md
index 0d34a762..171c92b8 100644
--- a/samples/generic-base/README.md
+++ b/samples/generic-base/README.md
@@ -8,7 +8,7 @@ reference links and variable listings.
 
 You customize your deployment by specifying values in `user.env` and
 `*.yaml` files. These files are stored in a local directory outside of your
-repository that is identified by the `USER_DIR` environment variable. 
+repository that is identified by the `USER_DIR` environment variable.
 For information about the customization process, see [Create the Deployment Directory](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p15fe8611w9njkn1fucwbvlz8tyg.htm) in the SAS Viya Monitoring for Kubernetes Help Center.
 
 The customization files in this sample provide a starting point for
@@ -25,7 +25,8 @@ your customization files after you add the values in this sample.
 After you finish modifying the customization files, deploy the
 metric-monitoring and log-monitoring components. See [Deploy](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n1rhzwx0mcnnnun17q11v85bspyk.htm) in the SAS Viya Monitoring for Kubernetes Help Center.
 
-## Grafana Dashboards
+## Grafana Dashboards and Alerting
 
-In addition to customizing the deployment, you can also use this sample to add
-your own Grafana dashboards. See [Add More Grafana Dashboards](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n1sg9bc44ow616n1sw7l3dlsbmgz.htm) for details.
+In addition to customizing the deployment, this sample shows how you can add
+your own Grafana dashboards, alerts, contact points, and notification policies. See [Add More Grafana Dashboards](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n1sg9bc44ow616n1sw7l3dlsbmgz.htm), [Add More Grafana Alerts](***NEED***LINK***) and
+[Configure Contact Points, and Notification Policies](***NEED***LINK***) for details.
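+
+The following is a minimal sketch of that workflow. It assumes your
+customization directory is `~/v4m-custom`, that user-provided dashboards are
+read from `$USER_DIR/monitoring/dashboards`, and that user-provided alerting
+definitions are read from `$USER_DIR/monitoring/alerting`; the file names
+shown are hypothetical, so adjust the paths and names for your environment.
+
+```bash
+# Point the deployment scripts at your local customization directory
+export USER_DIR=~/v4m-custom
+mkdir -p "$USER_DIR"
+
+# Start from this sample's customization files
+cp -r samples/generic-base/* "$USER_DIR/"
+
+# Add your own Grafana content (hypothetical file names)
+mkdir -p "$USER_DIR/monitoring/dashboards" "$USER_DIR/monitoring/alerting"
+cp my-dashboard.json "$USER_DIR/monitoring/dashboards/"
+cp my-alerts.yaml "$USER_DIR/monitoring/alerting/"
+
+# Deploy (or redeploy) the monitoring components to pick up the changes
+monitoring/bin/deploy_monitoring_cluster.sh
+```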
diff --git a/samples/generic-base/monitoring/alerting/README.md b/samples/generic-base/monitoring/alerting/README.md new file mode 100644 index 00000000..ede6aa8b --- /dev/null +++ b/samples/generic-base/monitoring/alerting/README.md @@ -0,0 +1,7 @@ +# User-Provided Alerts, Contact Points and Notification Policies + +You can use the `$USER_DIR/monitoring/alerting` directory to supply +a set of additional Grafana alerts to deploy with the monitoring +components. You can also use the directory to define Grafana contact +points and notification policies . See [Add More Grafana Alerts](***NEED***LINK***) and +[Configure Contact Points, and Notification Policies](***NEED***LINK***) for details. diff --git a/samples/ingress/README.md b/samples/ingress/README.md index b050b419..704abe8c 100644 --- a/samples/ingress/README.md +++ b/samples/ingress/README.md @@ -2,7 +2,7 @@ ## Overview -This sample demonstrates how to configure Kubernetes Ingress for accessing the +This sample demonstrates how to configure Kubernetes Ingress for accessing the web applications that are deployed as part of the SAS Viya Monitoring for Kubernetes solution. This sample provides information about two scenarios: @@ -15,23 +15,30 @@ These scenarios differ because of the URL that is used to access the application * For host-based Ingress, the application name is part of the host name itself (for example, `https://grafana.host.cluster.example.com/`). * For path-based Ingress, the host name is fixed and the application name is appended as a path on the URL (for example, `https://host.cluster.example.com/grafana`). +**Note:** The ability to automatically generate Kubernetes Ingress resource +definitions to permit access to the web applications was added with version +1.2.36 (released 15APR25). Depending on your requirements, this may +eliminate the need to manually configure things as demonstrated in this +sample. See the [Configure Ingress Access to Web Applications](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n0auhd4hutsf7xn169hfvriysz4e.htm#n0jiph3lcb5rmsn1g71be3cesmo8) +topic within the Help Center documentation for further information. + ## Using This Sample -**Note:** For information about the customization process, see +**Note:** For information about the customization process, see [Create the Deployment Directory](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p15fe8611w9njkn1fucwbvlz8tyg.htm) in the SAS Viya Monitoring for Kubernetes Help Center. The customization files in this sample provide a starting point for the customization files required by a deployment that uses Kubernetes Ingress for accessing the web applications. -To use the sample customization files in your +To use the sample customization files in your deployment, complete these steps: 1. Copy the customization files from either the `host-based-ingress` -or `path-based-ingress` subdirectories to your local customization directory +or `path-based-ingress` subdirectories to your local customization directory (that is, your `USER_DIR`). -2. In the configuration files, replace all instances of - `host.cluster.example.com` with the applicable host name for your +2. In the configuration files, replace all instances of + `host.cluster.example.com` with the applicable host name for your environment. 3. (Optional) Modify the files further, as needed. @@ -42,8 +49,8 @@ SAS Viya Monitoring for Kubernetes. 
For more information, see ## Update the YAML Files Edit the .yaml files within your `$USER_DIR/monitoring` and `$USER_DIR/monitoring` -subdirectories. Replace any sample host names with the applicable host name -for your deployment. Specifically, you must replace `host.cluster.example.com` with +subdirectories. Replace any sample host names with the applicable host name +for your deployment. Specifically, you must replace `host.cluster.example.com` with the Ingress controller's endpoint. ## Specify TLS Certificates for Use with Ingress @@ -53,8 +60,8 @@ intra-cluster communications. The deployment scripts now automatically generate self-signed TLS certificates for this purpose if you do not specify your own. For details, see [Transport Layer Security (TLS): Digital Certificates and Kubernetes Secrets](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p1tedn8lzgvhlyn1bzwgvqvv3p4j.htm#n1pnll5qjcigjvn15shfdhdhr0lz). This sample assumes that access to the web applications also should be secured using -TLS (that is, the web applications should be accessed via HTTPS instead of HTTP). This requires a second set of TLS -certificates that differ from those used for intra-cluster communication. However, these certificates are **not** +TLS (that is, the web applications should be accessed via HTTPS instead of HTTP). This requires a second set of TLS +certificates that differ from those used for intra-cluster communication. However, these certificates are **not** created automatically for you. You must obtain these certificates, create Kubernetes secrets with specific names, and make them available to SAS Viya Monitoring for Kubernetes. For details, see [Enable TLS for Ingress](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n0auhd4hutsf7xn169hfvriysz4e.htm#n13g4ybmjfxr2an1tuy6a20zpvw7). @@ -81,7 +88,7 @@ Ingress, the applications are available at these locations: **Note:** Be sure to replace the placeholder host names with the host names that you specified in your environment. -When you deploy using path-based Ingress, the following applications are available at these locations. +When you deploy using path-based Ingress, the following applications are available at these locations. * Grafana - `https://host.mycluster.example.com/grafana` * OpenSearch Dashboards - `https://host.mycluster.example.com/dashboards` diff --git a/samples/tls/README.md b/samples/tls/README.md deleted file mode 100644 index df2e051e..00000000 --- a/samples/tls/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# Enabling TLS for SAS Viya Monitoring for Kubernetes - -As of release 1.2.15 (18JUL23), the components of SAS Viya Monitoring for Kubernetes -are deployed with TLS *enabled by default*. Therefore, no additional steps are required and this sample is no longer necessary. - -For information about how to configure access via Kubernetes Ingress to the SAS Viya -Monitoring for Kubernetes web applications, see the [Ingress sample](../ingress/README.md). - ->***IMPORTANT NOTE: This sample is deprecated.*** - -This sample will be removed at some point in the future. \ No newline at end of file