Commit c33cb8b

RHDEVDOCS-4259 - GCP & minor eventrouter correction w/peer rev
1 parent c1cc43d commit c33cb8b

3 files changed: 64 additions, 4 deletions

logging/cluster-logging-external.adoc

Lines changed: 2 additions & 1 deletion

@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
-By default, the {logging} sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
+By default, the {logging} sends container and infrastructure logs to the default internal log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
 
 To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
 
@@ -188,6 +188,7 @@ include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.
 * xref:../logging/cluster-logging-exported-fields.adoc#cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields[Log Record Fields].
 * link:https://grafana.com/docs/loki/latest/configuration/[Configuring Loki server]
 
+include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]
 
 include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]

modules/cluster-logging-collector-log-forward-gcp.adoc

Lines changed: 59 additions & 0 deletions (new file)

// Module included in the following assemblies:
// cluster-logging-external.adoc
//

:_content-type: PROCEDURE
[id="cluster-logging-collector-log-forward-gcp_{context}"]
= Forwarding logs to Google Cloud Platform (GCP)

You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging] in addition to, or instead of, the internal default {product-title} log store.

[NOTE]
====
Using this feature with Fluentd is not supported.
====

.Prerequisites

* {logging-title-uc} Operator 5.5.1 and later

.Procedure

. Create a secret using your link:https://cloud.google.com/iam/docs/creating-managing-service-account-keys[Google service account key]:
+
[source,terminal,subs="+quotes"]
----
$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=_<your_service_account_key_file.json>_
----
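Not part of this module: as a rough sketch of what the `oc create secret generic --from-file` step above produces, the equivalent `Secret` manifest can be built by base64-encoding the key file. All names here mirror the command in the step; the key content is a placeholder.

```python
import base64
import json

def gcp_secret_manifest(key_json: str, name: str = "gcp-secret",
                        namespace: str = "openshift-logging") -> dict:
    """Build a Secret manifest roughly equivalent to the `oc create secret
    generic --from-file` command above. Kubernetes stores every value under
    `data` base64-encoded."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name, "namespace": namespace},
        "type": "Opaque",
        "data": {
            "google-application-credentials.json":
                base64.b64encode(key_json.encode()).decode(),
        },
    }

# Placeholder key content; a real key file comes from the GCP console.
manifest = gcp_secret_manifest(json.dumps({"type": "service_account"}))
```

Applying such a manifest with `oc apply -f -` has the same effect as the imperative command in the step.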
. Create a `ClusterLogForwarder` custom resource (CR) YAML file using the following template:
+
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogForwarder"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    secret:
      name: gcp-secret
    googleCloudLogging:
      projectId: "openshift-gce-devel" <1>
      logId: "app-gcp" <2>
  pipelines:
  - name: test-app
    inputRefs: <3>
    - application
    outputRefs:
    - gcp-1
----
<1> Set either a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
<2> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[Log Entry].
<3> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
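Callout <1> notes that the output can target other levels of the GCP resource hierarchy. As an illustrative variant (not part of this commit; the folder ID is an assumed placeholder), the `outputs` stanza for storing logs under a folder rather than a project would look like:

```
[source,yaml]
----
outputs:
- name: gcp-1
  type: googleCloudLogging
  secret:
    name: gcp-secret
  googleCloudLogging:
    folderId: "012345678901" # assumed placeholder folder ID
    logId: "app-gcp"
----
```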
[role="_additional-resources"]
.Additional resources

* link:https://cloud.google.com/billing/docs/concepts[Google Cloud Billing Documentation]
* link:https://cloud.google.com/logging/docs/view/logging-query-language[Google Cloud Logging Query Language Documentation]

modules/cluster-logging-eventrouter-deploy.adoc

Lines changed: 3 additions & 3 deletions

@@ -107,7 +107,7 @@ objects:
 parameters:
 - name: IMAGE <6>
   displayName: Image
-  value: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.3"
+  value: "registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4"
 - name: CPU <7>
   displayName: CPU
   value: "100m"
@@ -124,8 +124,8 @@ parameters:
 <4> Creates a config map in the `openshift-logging` project to generate the required `config.json` file.
 <5> Creates a deployment in the `openshift-logging` project to generate and configure the Event Router pod.
 <6> Specifies the image, identified by a tag such as `v0.3`.
-<7> Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to `128Mi`.
-<8> Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to `100m`.
+<7> Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to `100m`.
+<8> Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to `128Mi`.
 <9> Specifies the `openshift-logging` project to install objects in.
 
 . Use the following command to process and apply the template:
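The command itself falls outside this diff hunk. A typical invocation (assumed here, not shown in this commit; `<template_file>` is a placeholder for your saved template) processes the template and applies the resulting objects:

```
[source,terminal]
----
$ oc process -f <template_file> | oc apply -n openshift-logging -f -
----
```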
