Commit 9079644

Merge pull request #69727 from abrennan89/OBSDOCS-603-jan
OBSDOCS-603: Update instances of Loki Operator to use attribute
2 parents 1c84608 + f2ead3d

19 files changed: +33 −110 lines

logging/log_storage/cluster-logging-loki.adoc (1 addition, 1 deletion)

@@ -56,5 +56,5 @@ include::modules/loki-rate-limit-errors.adoc[leveloffset=+1]
 * link:https://grafana.com/docs/loki/latest/logql/[Loki Query Language (LogQL) documentation]
 * link:https://loki-operator.dev/docs/howto_connect_grafana.md/[Grafana Dashboard documentation]
 * link:https://loki-operator.dev/docs/object_storage.md/[Loki Object Storage documentation]
-* link:https://loki-operator.dev/docs/api.md/#loki-grafana-com-v1-IngestionLimitSpec[Loki Operator `IngestionLimitSpec` documentation]
+* link:https://loki-operator.dev/docs/api.md/#loki-grafana-com-v1-IngestionLimitSpec[{loki-op} `IngestionLimitSpec` documentation]
 * link:https://grafana.com/docs/loki/latest/operations/storage/schema/#changing-the-schema[Loki Storage Schema documentation]

modules/cluster-logging-about.adoc (6 additions, 9 deletions)

@@ -4,9 +4,6 @@
 // * logging/cluster-logging.adoc
 // * serverless/monitor/cluster-logging-serverless.adoc

-// This module uses conditionalized paragraphs so that the module
-// can be re-used in associated products.
-
 :_mod-docs-content-type: CONCEPT
 [id="cluster-logging-about_{context}"]
 = About deploying the {logging-title}

@@ -20,21 +17,21 @@ Administrators and application developers can view the logs of the projects for

 You can configure your {logging} deployment with custom resource (CR) YAML files implemented by each Operator.

-*Red Hat Openshift Logging Operator*:
+*{clo}*:

-* `ClusterLogging` (CL) - After the Operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. The `ClusterLogging` CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The Red Hat OpenShift Logging Operator watches the `ClusterLogging` CR and adjusts the logging deployment accordingly.
+* `ClusterLogging` (CL) - After the Operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule {logging} pods and other resources necessary to support the {logging}. The `ClusterLogging` CR deploys the collector and forwarder, which currently are both implemented by a daemonset running on each node. The {clo} watches the `ClusterLogging` CR and adjusts the logging deployment accordingly.

 * `ClusterLogForwarder` (CLF) - Generates collector configuration to forward logs per user configuration.

-*Loki Operator*:
+*{loki-op}*:

-* `LokiStack` - Controls the Loki cluster as log store and the web proxy with OpenShift Container Platform authentication integration to enforce multi-tenancy.
+* `LokiStack` - Controls the Loki cluster as log store and the web proxy with {product-title} authentication integration to enforce multi-tenancy.

-*OpenShift Elasticsearch Operator*:
+*{es-op}*:

 [NOTE]
 ====
-These CRs are generated and managed by the Red Hat OpenShift Elasticsearch Operator. Manual changes cannot be made without being overwritten by the Operator.
+These CRs are generated and managed by the {es-op}. Manual changes cannot be made without being overwritten by the Operator.
 ====

 * `ElasticSearch` - Configure and deploy an Elasticsearch instance as the default log store.
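For context on the `ClusterLogging` CR discussed in this hunk, a minimal instance might look like the following sketch. The names and sizes shown are illustrative assumptions, not part of this commit:

```yaml
# Hypothetical minimal ClusterLogging CR; field values are illustrative.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                # the Operator expects this well-known name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: lokistack
    lokistack:
      name: logging-loki        # assumed name of an existing LokiStack CR
  collection:
    type: vector                # the collector deployed as a daemonset on each node
```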

modules/logging-loki-reliability-hardening.adoc (1 addition, 1 deletion)

@@ -6,7 +6,7 @@
 [id="logging-loki-reliability-hardening_{context}"]
 = Configuring Loki to tolerate node failure

-In the {logging} 5.8 and later versions, the Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.
+In the {logging} 5.8 and later versions, the {loki-op} supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster.

 include::snippets/about-pod-affinity.adoc[]

modules/logging-loki-restart-hardening.adoc (1 addition, 1 deletion)

@@ -6,4 +6,4 @@
 [id="logging-loki-restart-hardening_{context}"]
 = LokiStack behavior during cluster restarts

-In logging version 5.8 and newer versions, when an {product-title} cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during {product-title} cluster updates. This behavior is achieved by using `PodDisruptionBudget` resources. The Loki Operator provisions `PodDisruptionBudget` resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
+In logging version 5.8 and newer versions, when an {product-title} cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during {product-title} cluster updates. This behavior is achieved by using `PodDisruptionBudget` resources. The {loki-op} provisions `PodDisruptionBudget` resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions.
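The `PodDisruptionBudget` mechanism described in this module can be illustrated with a generic sketch. This is not the Operator-generated resource; the name, namespace, and labels are assumptions:

```yaml
# Illustrative PodDisruptionBudget; the Operator generates an equivalent per Loki component.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: logging-loki-ingester   # hypothetical name
  namespace: openshift-logging
spec:
  minAvailable: 1               # keep at least one ingester pod running during voluntary disruptions
  selector:
    matchLabels:
      app.kubernetes.io/component: ingester
```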

modules/logging-loki-zone-aware-rep.adoc (1 addition, 1 deletion)

@@ -6,7 +6,7 @@
 [id="logging-loki-zone-aware-rep_{context}"]
 = Zone aware data replication

-In the {logging} 5.8 and later versions, the Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as `1x.extra.small`, `1x.small`, or `1x.medium`, the `replication.factor` field is automatically set to 2.
+In the {logging} 5.8 and later versions, the {loki-op} offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as `1x.extra.small`, `1x.small`, or `1x.medium`, the `replication.factor` field is automatically set to 2.

 To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation.
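A `LokiStack` CR that opts into the zone-aware replication described above might look like this sketch. The secret and storage class names are placeholder assumptions:

```yaml
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  size: 1x.small                # this size defaults replication.factor to 2
  replication:
    factor: 2
    zones:
    - topologyKey: topology.kubernetes.io/zone   # spread replicas across availability zones
  storage:
    secret:
      name: logging-loki-s3     # assumed object storage secret name
      type: s3
  storageClassName: gp3-csi     # assumed storage class
```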

modules/network-observability-flowcollector-view.adoc (1 addition, 1 deletion)

@@ -87,6 +87,6 @@ spec:
 <1> The Agent specification, `spec.agent.type`, must be `EBPF`. eBPF is the only {product-title} supported option.
 <2> You can set the Sampling specification, `spec.agent.ebpf.sampling`, to manage resources. Lower sampling values might consume a large amount of computational, memory and storage resources. You can mitigate this by specifying a sampling ratio value. A value of 100 means 1 flow every 100 is sampled. A value of 0 or 1 means all flows are captured. The lower the value, the greater the number of returned flows and the accuracy of derived metrics. By default, eBPF sampling is set to a value of 50, so 1 flow every 50 is sampled. Note that more sampled flows also means more storage needed. It is recommended to start with default values and refine empirically, to determine which setting your cluster can manage.
 <3> The optional specifications `spec.processor.logTypes`, `spec.processor.conversationHeartbeatInterval`, and `spec.processor.conversationEndTimeout` can be set to enable conversation tracking. When enabled, conversation events are queryable in the web console. The values for `spec.processor.logTypes` are as follows: `FLOWS`, `CONVERSATIONS`, `ENDED_CONVERSATIONS`, or `ALL`. Storage requirements are highest for `ALL` and lowest for `ENDED_CONVERSATIONS`.
-<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the Loki Operator section. If you used another installation method for Loki, specify the appropriate client information for your install.
+<4> The Loki specification, `spec.loki`, specifies the Loki client. The default values match the Loki install paths mentioned in the Installing the {loki-op} section. If you used another installation method for Loki, specify the appropriate client information for your install.
 <5> The original certificates are copied to the Network Observability instance namespace and watched for updates. When not provided, the namespace defaults to be the same as "spec.namespace". If you chose to install Loki in a different namespace, you must specify it in the `spec.loki.tls.caCert.namespace` field. Similarly, the `spec.exporters.kafka.tls.caCert.namespace` field is available for Kafka installed in a different namespace.
 <6> The `spec.quickFilters` specification defines filters that show up in the web console. The `Application` filter keys, `src_namespace` and `dst_namespace`, are negated (`!`), so the `Application` filter shows all traffic that _does not_ originate from, or have a destination to, any `openshift-` or `netobserv` namespaces. For more information, see Configuring quick filters below.
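The callouts above refer to fields of a `FlowCollector` fragment along these lines. This is a sketch only; the API version and Loki URL are assumptions based on the install paths the callouts mention, and the values shown are the documented defaults:

```yaml
apiVersion: flows.netobserv.io/v1beta1
kind: FlowCollector
metadata:
  name: cluster
spec:
  agent:
    type: EBPF                  # <1> eBPF is the only supported agent type
    ebpf:
      sampling: 50              # <2> default: 1 flow in every 50 is sampled
  processor:
    logTypes: FLOWS             # <3> FLOWS, CONVERSATIONS, ENDED_CONVERSATIONS, or ALL
  loki:                         # <4> assumed default in-cluster install path
    url: 'https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network'
```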

modules/network-observability-kafka-option.adoc (2 additions, 2 deletions)

@@ -5,9 +5,9 @@
 :_mod-docs-content-type: CONCEPT
 [id="network-observability-kafka-option_{context}"]
 = Installing Kafka (optional)
-The Kafka Operator is supported for large scale environments. Kafka provides high-throughput and low-latency data feeds for forwarding network flow data in a more resilient, scalable way. You can install the Kafka Operator as link:https://access.redhat.com/documentation/en-us/red_hat_amq_streams/2.2[Red Hat AMQ Streams] from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed. Refer to "Configuring the FlowCollector resource with Kafka" to configure Kafka as a storage option.
+The Kafka Operator is supported for large scale environments. Kafka provides high-throughput and low-latency data feeds for forwarding network flow data in a more resilient, scalable way. You can install the Kafka Operator as link:https://access.redhat.com/documentation/en-us/red_hat_amq_streams/2.2[Red Hat AMQ Streams] from the Operator Hub, just as the {loki-op} and Network Observability Operator were installed. Refer to "Configuring the FlowCollector resource with Kafka" to configure Kafka as a storage option.

 [NOTE]
 ====
 To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install.
-====
+====

modules/network-observability-loki-install.adoc (5 additions, 5 deletions)

@@ -4,8 +4,8 @@

 :_mod-docs-content-type: PROCEDURE
 [id="network-observability-loki-installation_{context}"]
-= Installing the Loki Operator
-The link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator versions 5.7+] are the supported Loki Operator versions for Network Observability; these versions provide the ability to create a `LokiStack` instance using the `openshift-network` tenant configuration mode and provide fully-automatic, in-cluster authentication and authorization support for Network Observability. There are several ways you can install Loki. One way is by using the {product-title} web console Operator Hub.
+= Installing the {loki-op}
+The link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[{loki-op} versions 5.7+] are the supported {loki-op} versions for Network Observability; these versions provide the ability to create a `LokiStack` instance using the `openshift-network` tenant configuration mode and provide fully-automatic, in-cluster authentication and authorization support for Network Observability. There are several ways you can install Loki. One way is by using the {product-title} web console Operator Hub.

 .Prerequisites

@@ -15,12 +15,12 @@ The link:https://catalog.redhat.com/software/containers/openshift-logging/loki-r

 .Procedure
 . In the {product-title} web console, click *Operators* -> *OperatorHub*.
-. Choose *Loki Operator* from the list of available Operators, and click *Install*.
+. Choose *{loki-op}* from the list of available Operators, and click *Install*.
 . Under *Installation Mode*, select *All namespaces on the cluster*.

 .Verification
-. Verify that you installed the Loki Operator. Visit the *Operators* -> *Installed Operators* page and look for *Loki Operator*.
-. Verify that *Loki Operator* is listed with *Status* as *Succeeded* in all the projects.
+. Verify that you installed the {loki-op}. Visit the *Operators* -> *Installed Operators* page and look for *{loki-op}*.
+. Verify that *{loki-op}* is listed with *Status* as *Succeeded* in all the projects.

 [IMPORTANT]
 ====

modules/network-observability-loki-secret.adoc (1 addition, 1 deletion)

@@ -5,7 +5,7 @@
 :_mod-docs-content-type: PROCEDURE
 [id="network-observability-loki-secret_{context}"]
 = Creating a secret for Loki storage
-The Loki Operator supports a few log storage options, such as AWS S3, Google Cloud Storage, Azure, Swift, Minio, and OpenShift Data Foundation. The following example shows how to create a secret for AWS S3 storage. The secret created in this example, `loki-s3`, is referenced in "Creating a LokiStack resource". You can create this secret in the web console or CLI.
+The {loki-op} supports a few log storage options, such as AWS S3, Google Cloud Storage, Azure, Swift, Minio, and OpenShift Data Foundation. The following example shows how to create a secret for AWS S3 storage. The secret created in this example, `loki-s3`, is referenced in "Creating a LokiStack resource". You can create this secret in the web console or CLI.

 . Using the web console, navigate to the *Project* -> *All Projects* dropdown and select *Create Project*. Name the project `netobserv` and click *Create*.
 . Navigate to the Import icon, *+*, in the top right corner. Paste your YAML file into the editor.
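The `loki-s3` secret for AWS S3 mentioned above could be sketched like this. The key names follow the Loki Operator object-storage conventions, and the values are placeholders to replace with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: loki-s3
  namespace: netobserv
stringData:
  access_key_id: <AWS_ACCESS_KEY_ID>          # placeholder
  access_key_secret: <AWS_SECRET_ACCESS_KEY>  # placeholder
  bucketnames: <s3-bucket-name>               # placeholder
  endpoint: https://s3.eu-central-1.amazonaws.com   # assumed region endpoint
  region: eu-central-1                        # assumed region
```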

modules/network-observability-multitenancy.adoc (2 additions, 2 deletions)

@@ -8,7 +8,7 @@
 Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki. Access is enabled for project admins. Project admins who have limited access to some namespaces can access flows for only those namespaces.

 .Prerequisites
-* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[Loki Operator version 5.7].
+* You have installed link:https://catalog.redhat.com/software/containers/openshift-logging/loki-rhel8-operator/622b46bcae289285d6fcda39[{loki-op} version 5.7].
 * The `FlowCollector` `spec.loki.authToken` configuration must be set to `FORWARD`.
 * You must be logged in as a project administrator.

@@ -22,4 +22,4 @@ $ oc adm policy add-cluster-role-to-user netobserv-reader user1
 ----
 +
 Now, the data is restricted to only allowed user namespaces. For example, a user that has access to a single namespace can see all the flows internal to this namespace, as well as flows going from and to this namespace.
-Project admins have access to the Administrator perspective in the {product-title} console to access the Network Flows Traffic page.
+Project admins have access to the Administrator perspective in the {product-title} console to access the Network Flows Traffic page.
