95 changes: 37 additions & 58 deletions logging/Azure_log_analytics_workspaces.md

## Introduction

If you use Microsoft Azure and want to use Azure Monitor *instead* of OpenSearch
to explore, filter, and report on log messages from a SAS Viya environment, you
can deploy the alternate logging solution described in this document. This solution
uses a combination of Fluent Bit and an Azure Log Analytics workspace to handle log
messages so that they can be accessed by Azure Monitor. Note that development
of this solution has been limited; it is primarily a proof of concept.

**Note: Azure Monitor and Azure Log Analytics are optional features of
Microsoft Azure, and they require agreement to additional licensing terms and
additional charges. You must understand the terms and charges before you
deploy the solution described in this document.**

## Technical Overview

In this alternate solution, log messages are collected by the Fluent Bit
pods that are part of the standard logging solution and that are deployed
cluster-wide by using a Kubernetes DaemonSet. These Fluent Bit pods can
parse and process log messages from all SAS Viya components, including
third-party products. As a result, log messages are handled consistently,
regardless of the original source.
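
As a quick check that the DaemonSet has placed a Fluent Bit pod on each node, you
can list the pods. In this sketch, the namespace and the label selector are
assumptions; adjust them to match your deployment.

```bash
# List the Fluent Bit pods created by the DaemonSet (one per node).
# The namespace ("logging") and the label selector are assumptions; adjust them
# to match your deployment.
kubectl get pods -n logging -l app=fluent-bit -o wide
```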

In the standard solution, Fluent Bit sends the log messages to OpenSearch.
In this solution, the log messages are loaded into a Log Analytics workspace
as a "custom log" source. You can then use Azure Monitor to explore, filter, and
report on the collected log messages.

## Deploy the Fluent Bit and Azure Log Analytics Solution
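
The deployment requires connection information for the target Log Analytics
workspace, including one of the workspace's shared keys. The shared keys can be
retrieved with the Azure CLI; a minimal sketch is shown below, in which the
resource group and workspace names are placeholders.

```bash
# Retrieve the shared keys for the target Log Analytics workspace.
# The resource group and workspace names are placeholders; replace them with
# your own values.
az monitor log-analytics workspace get-shared-keys \
  --resource-group my-resource-group \
  --workspace-name my-log-workspace
```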

This command returns two shared keys, labeled ***"primarySharedKey"*** and ***"secondarySharedKey"***.

To deploy all of the logging components for this solution, issue this command:
```bash
/logging/bin/deploy_logging_azmonitor.sh
```

## Remove the Fluent Bit and Azure Log Analytics Solution

To remove all logging components for this solution, issue this command:
```bash
/logging/bin/remove_logging_azmonitor.sh
```

By default, this script does not delete the namespace, but it does delete configmaps and secrets that were created by the deployment script. If you would like to delete the namespace as well, set the environment variable `LOG_DELETE_NAMESPACE_ON_REMOVE` to
*'true'* prior to running the script.
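
For example, to remove the logging components and delete the namespace in a single pass:

```bash
# Delete the namespace in addition to the other logging components.
export LOG_DELETE_NAMESPACE_ON_REMOVE=true
/logging/bin/remove_logging_azmonitor.sh
```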

## Using Connection Information From a Kubernetes Secret

The deployment script creates a Kubernetes secret named `connection-info-azmonitor` containing the connection information.
This ensures that the connection information is available in case the Fluent Bit pods are
restarted or new nodes are added to the cluster. This secret is created
in the same namespace into which the Fluent Bit pods are deployed. If this secret already exists
when you run the deployment script, the script obtains the connection information from the existing secret.
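
To confirm that the secret is present, you can query it with kubectl; in this
sketch, the namespace is a placeholder for the namespace used for the Fluent Bit pods.

```bash
# Check for the connection-information secret created by the deployment script.
# Replace "logging" with the namespace into which the Fluent Bit pods were deployed.
kubectl get secret connection-info-azmonitor -n logging
```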
After deploying this solution, the collected log messages appear as
a new table, **viya_logs_CL**, in the ***Custom Logs*** grouping within the
specified Log Analytics workspace. The structure of this table is similar
to the structure of the log messages that are surfaced in OpenSearch Dashboards when using the
standard logging solution. However, due to features of the Azure
API, the names of some fields are slightly different. The tables feature
a flattened data model, so multi-level JSON fields appear as multiple
fields with the JSON hierarchy embedded in each field's name. In addition, a
suffix is added to the name of most of the fields to indicate the field's data type,
such as ***_s*** for a string field and ***_d*** for a numeric (double) field.

For example, fields such as **kube.namespace** and **kube.pod** appear in the **viya_logs_CL** table as **kube_namespace_s** and **kube_pod_s**.

## Using the Data

Although a full explanation of how you can use the collected log messages in
Azure Monitor and the Log Analytics workspace is out of scope for this document,
here are some tips to help you get started.

### Kusto Queries
Kusto is a powerful query language used by Log Analytics workspaces
and Azure Monitor. To access an interactive Kusto query window, select your
Log Analytics workspace in Azure Monitor, and then select **Logs** from
the **General** area of the toolbar on the left side of the window.

![Azure Log Analytics Workspace - Kusto Query](../img/screenshot-kustoquery-chart.png)

The query window enables you to perform these actions:
- Enter Kusto queries.
- Display query results as charts or graphs.
- Export the query results.
- Add query results to an Azure dashboard.

You can also use Kusto queries as part of Azure Monitor workbooks.
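
You can also run Kusto queries from the command line by using the Azure CLI. The
sketch below assumes that the Azure CLI log-analytics extension is installed;
the workspace GUID is a placeholder.

```bash
# Run a Kusto query against the workspace from the command line.
# Assumes the Azure CLI log-analytics extension; the workspace GUID is a placeholder.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "viya_logs_CL | take 10"
```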

Here are some sample Kusto queries.

### Sample Query #1: Display Recent Log Messages
This Kusto query returns all of the log messages collected in the past
five minutes.
```
viya_logs_CL
| where TimeGenerated > ago(5m)
```
If a query returns a large number of results, only the first
10,000 results are shown. A message is displayed if the number of results is limited.

### Sample Query #2: Display Selected Fields
This Kusto query also returns the log messages collected in the past five minutes, but it
returns only specific fields. Limiting the information returned might make it easier
to interpret the results.
```
viya_logs_CL
| where TimeGenerated > ago(5m)
// The field list below is an assumed example; replace it with the fields you need.
| project TimeGenerated, Level, logsource_s, message
```
### Sample Query #3: Display Message Counts by Message Severity and Source
The following query also covers the past five minutes, but it returns the number of
log messages, summarized by message severity (**Level**) and source (**logsource_s**).
The query returns the results in the form of a table. To view the results as
a chart, click **Chart** in the menu above the results output.
```
viya_logs_CL
| where TimeGenerated > ago(5m)
// The summarize clause follows the description above; adjust the field names if yours differ.
| summarize count() by Level, logsource_s
```
101 changes: 0 additions & 101 deletions logging/Differences_between_ODFE_and_OpenSearch.md

This file was deleted.

14 changes: 7 additions & 7 deletions logging/README.md
To learn how to deploy the logging component, see [Getting Started](https://docu

## Important Information about OpenSearch and OpenSearch Dashboards

This project has used OpenSearch and OpenSearch Dashboards since version 1.2.0,
released in June 2022. Prior to that, the project used Elasticsearch and Kibana.
To support backward compatibility, some configuration options, environment
variables, and other aspects of this project still include references to those
earlier product names. References to Elasticsearch and Kibana should be
understood to refer to OpenSearch and OpenSearch Dashboards, respectively.
5 changes: 2 additions & 3 deletions samples/README.md

You customize your logging deployment by specifying values in `user.env` and
`*.yaml` files. These files are stored in a local directory outside of your
repository that is identified by the `USER_DIR` environment variable.
For information about the customization process, see [Create the Deployment Directory](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p15fe8611w9njkn1fucwbvlz8tyg.htm) in the SAS Viya Monitoring for Kubernetes Help Center.
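
For example, you might set `USER_DIR` before running the deployment scripts; the
path below is a placeholder.

```bash
# Point USER_DIR at the local directory that contains your user.env and *.yaml
# customization files. The path is a placeholder.
export USER_DIR=~/viya-monitoring-customizations
```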

The customization files in each sample provide a starting point for the
* [ingress](ingress) - Deploys using host-based or path-based ingress.
* [namespace-monitoring](namespace-monitoring) - Separates cluster monitoring
from SAS Viya monitoring.


## Other Samples

9 changes: 5 additions & 4 deletions samples/generic-base/README.md

You customize your deployment by specifying values in `user.env` and `*.yaml`
files. These files are stored in a local directory outside of your
repository that is identified by the `USER_DIR` environment variable.
For information about the customization process, see [Create the Deployment Directory](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=p15fe8611w9njkn1fucwbvlz8tyg.htm) in the SAS Viya Monitoring for Kubernetes Help Center.

The customization files in this sample provide a starting point for
After you finish modifying the customization files, deploy the metric-monitoring and
log-monitoring components. See [Deploy](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n1rhzwx0mcnnnun17q11v85bspyk.htm) in the SAS Viya Monitoring for Kubernetes Help Center.

## Grafana Dashboards and Alerting

In addition to customizing the deployment, this sample shows how you can add
your own Grafana dashboards, alerts, contact points, and notification policies. See [Add More Grafana Dashboards](https://documentation.sas.com/?cdcId=obsrvcdc&cdcVersion=v_003&docsetId=obsrvdply&docsetTarget=n1sg9bc44ow616n1sw7l3dlsbmgz.htm), [Add More Grafana Alerts](***NEED***LINK***), and
[Configure Contact Points and Notification Policies](***NEED***LINK***) for details.
7 changes: 7 additions & 0 deletions samples/generic-base/monitoring/alerting/README.md
# User-Provided Alerts, Contact Points and Notification Policies

You can use the `$USER_DIR/monitoring/alerting` directory to supply
a set of additional Grafana alerts to deploy with the monitoring
components. You can also use the directory to define Grafana contact
points and notification policies. See [Add More Grafana Alerts](***NEED***LINK***) and
[Configure Contact Points and Notification Policies](***NEED***LINK***) for details.
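
A minimal sketch of populating the directory is shown below; the file names are
illustrative, and the expected file contents are described in the linked topics.

```bash
# Create the alerting directory under USER_DIR and copy in your Grafana
# alerting definitions. The file names are illustrative.
mkdir -p "$USER_DIR/monitoring/alerting"
cp my-alert-rules.yaml my-contact-points.yaml my-notification-policies.yaml \
   "$USER_DIR/monitoring/alerting/"
```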