modules/cluster-logging-visualizer-indices.adoc (1 addition & 1 deletion)
@@ -11,7 +11,7 @@ An index pattern defines the Elasticsearch indices that you want to visualize. T
 * A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices.
-If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access the these indices. You can use the following command to check if the current user has appropriate permissions:
+If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:
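The permission check that these modules reference falls outside the hunks captured here. A minimal sketch of such a check, assuming the standard `oc auth can-i` verb and a placeholder project name:

[source,terminal]
----
$ oc auth can-i get pods/log -n <project>
----

If this returns `yes` for the `default`, `kube-*`, and `openshift-*` projects, the user should be able to reach the *infra* and *audit* indices.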
modules/cluster-logging-visualizer-kibana.adoc (2 additions & 2 deletions)
@@ -13,9 +13,9 @@ You view cluster logs in the Kibana web console. The methods for viewing and vis
 * Kibana index patterns must exist.
-* A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices.
+* A user must have the `cluster-admin` role, the `cluster-reader` role, or both roles to view the *infra* and *audit* indices in Kibana. The default `kubeadmin` user has proper permissions to view these indices.
-If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access the these indices. You can use the following command to check if the current user has appropriate permissions:
+If you can view the pods and logs in the `default`, `kube-` and `openshift-` projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions:
modules/cluster-logging-visualizer-launch.adoc (2 additions & 3 deletions)
@@ -10,9 +10,9 @@ pie charts, heat maps, built-in geospatial support, and other visualizations.
 .Prerequisites
-* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
+* To list the *infra* and *audit* indices in Kibana, a user must have the `cluster-admin` role, the `cluster-reader` role, or both roles. The default `kubeadmin` user has proper permissions to list these indices.
-If you can view the pods and logs in the `default`, `kube-*` and `openshift-*` projects, you should be able to access the these indices. You can use the following command to check if the current user has proper permissions:
+If you can view the pods and logs in the `default`, `kube-*` and `openshift-*` projects, you should be able to access these indices. You can use the following command to check if the current user has proper permissions:
 [source,terminal]
 ----
@@ -44,4 +44,3 @@ The Kibana interface launches.
 ====
 If you get a *security_exception* error in the Kibana console and cannot access your Kibana indices, you might have an expired OAuth token. If you see this error, log out of the Kibana console, and then log back in. This refreshes your OAuth tokens and you should be able to access your indices.
modules/ipi-install-additional-install-config-parameters.adoc (1 addition & 1 deletion)
@@ -143,7 +143,7 @@ a|`provisioningNetworkCIDR`
 |`bootstrapProvisioningIP`
 |The second IP address of the `provisioningNetworkCIDR`.
-|The IP on the bootstrap VM where the provisioning services run while the the installer is deploying the control plane (master) nodes. Defaults to the second IP of the `provisioning` subnet. For example, `172.22.0.2`
+|The IP on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP of the `provisioning` subnet. For example, `172.22.0.2`
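For context, a minimal sketch of how these parameters might appear in an `install-config.yaml` for bare-metal IPI; the CIDR value and the explicit override are illustrative assumptions, not part of this diff:

[source,yaml]
----
platform:
  baremetal:
    provisioningNetworkCIDR: 172.22.0.0/24  # the provisioning subnet
    bootstrapProvisioningIP: 172.22.0.2     # defaults to the second IP of that subnet
----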
-After making the change to the `Scheduler` config resource, wait for the `opensift-kube-apiserver` pods to redeploy. This can take several minutes. Until the pods redeploy, new scheduler does not take effect.
+After making the change to the `Scheduler` config resource, wait for the `openshift-kube-apiserver` pods to redeploy. This can take several minutes. Until the pods redeploy, new scheduler does not take effect.
 . Verify the scheduler policy is configured by viewing the log of a scheduler pod in the `openshift-kube-scheduler` namespace. The following command checks for the predicates and priorities that are being registered by the scheduler:
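The verification command itself is outside this hunk. A sketch of what such a check might look like, with a placeholder pod name; the exact command in the module may differ:

[source,terminal]
----
$ oc logs <scheduler-pod-name> -n openshift-kube-scheduler | grep -E 'predicates|priorities'
----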
modules/nodes-scheduler-node-selectors-cluster.adoc (1 addition & 1 deletion)
@@ -12,7 +12,7 @@ the pod on nodes with matching labels.
 You configure cluster-wide node selectors by creating a Scheduler Operator custom resource (CR). You add labels to a node by editing a `Node` object, a `MachineSet` object, or a `MachineConfig` object. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down.
-For example, the the Scheduler configures the cluster-wide `region=east` node selector:
+For example, the Scheduler configures the cluster-wide `region=east` node selector:
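The CR that follows this line in the module is not captured in the diff. A minimal sketch of a Scheduler CR carrying that selector, assuming the standard `config.openshift.io/v1` API; treat the exact manifest as an assumption:

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: region=east  # applied to pods in namespaces without their own selector
----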
modules/serverless-config-replicas.adoc (1 addition & 1 deletion)
@@ -5,7 +5,7 @@
 [id="serverless-config-replicas_{context}"]
 = Configuring high availability replicas on {ServerlessProductName}
-High availability (HA) functionality is available by default on {ServerlessProductName} for the `autoscaler-hpa`, `controller`, `activator`, `kourier-control`, and `kourier-gateway` controllers. These components are configured with two replicas by default.
+High availability (HA) functionality is available by default on {ServerlessProductName} for the `autoscaler-hpa`, `controller`, `activator`, `kourier-control`, and `kourier-gateway` controllers. These components are configured with two replicas by default.
 You modify the number of replicas that are created per controller by changing the configuration of `KnativeServing.spec.highAvailability` in the KnativeServing custom resource definition.
 // This field also specifies the minimum number of _activators_ if you are using the horizontal pod autoscaler (HPA). For more information about HPA, see
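A minimal sketch of such a replica override. The module names the field `spec.highAvailability`; some Knative operator CRD versions spell it `high-availability`, and this sketch assumes that spelling, so verify against your installed schema:

[source,yaml]
----
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  high-availability:
    replicas: 3  # replicas created per HA-enabled controller
----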
-= Disabling Machine Config Operator from automatically rebooting
+= Disabling Machine Config Operator from automatically rebooting
 When configuration changes are made by the Machine Config Operator (MCO), {op-system-first} must reboot for the changes to take effect. Whether the configuration change is automatic, such as when a `kube-apiserver-to-kubelet-signer` certificate authority (CA) is rotated, or manual, an {op-system} node reboots automatically unless it is paused.
@@ -15,7 +15,7 @@ The following modifications do not trigger a node reboot:
 * changes to the global pull secret or pull secret in the `openshift-config` namespace
 * changes to the `/etc/containers/registries.conf` file, such as adding or editing an `ImageContentSourcePolicy` object
-When the MCO detects any of these changes, it drains the corresponding nodes, applies the changes, and uncordons the nodes.
+When the MCO detects any of these changes, it drains the corresponding nodes, applies the changes, and uncordons the nodes.
 ====
 To avoid unwanted disruptions, you can modify the machine config pool to prevent automatic rebooting after the Operator makes changes to the machine config.
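The pause step itself falls outside the hunks captured here. A sketch of how a pool might be paused with `oc patch`, assuming the `worker` pool used in the verification commands below:

[source,terminal]
----
$ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker
----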
@@ -62,7 +62,7 @@ Pausing a machine config pool stops all system reboot processes and all configur
 # oc get machineconfigpool/worker --template='{{.spec.paused}}'
 ----
-The `spec.paused` field is `true` and the the machine config pool is paused.
+The `spec.paused` field is `true` and the machine config pool is paused.
 . Alternatively, to unpause the autoreboot process:
@@ -99,7 +99,7 @@ By unpausing a machine config pool, all paused changes are applied at reboot.
 # oc get machineconfigpool/worker --template='{{.spec.paused}}'
 ----
-The `spec.paused` field is `false` and the the machine config pool is unpaused.
+The `spec.paused` field is `false` and the machine config pool is unpaused.
 . To see if the machine config pool has pending changes:
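The pending-changes command is not captured in this diff. A sketch of one way to check, assuming the standard pool status columns:

[source,terminal]
----
$ oc get machineconfigpool worker
----

An `UPDATED` value of `False` while the pool is paused typically indicates configuration changes waiting to be applied.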
modules/virt-importing-vm-datavolume.adoc (4 additions & 4 deletions)
@@ -5,7 +5,7 @@
 [id="virt-importing-vm-datavolume_{context}"]
 = Importing a virtual machine image into a persistent volume claim by using a data volume
-You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume.
+You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume.
 The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or the image can be built into a container disk and stored in a container registry.
-<1> The source type to import the image from. This example uses a HTTP endpoint. To import a container disk from a registry, replace `http` with `registry`.
+<1> The source type to import the image from. This example uses an HTTP endpoint. To import a container disk from a registry, replace `http` with `registry`.
 <2> The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTP endpoint. An example of a container registry endpoint is `url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"`.
 <3> The `secretRef` parameter is optional.
 <4> The `certConfigMap` is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced config map must be in the same namespace as the data volume.
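The manifest these callouts annotate is not captured in the diff. A minimal sketch of a CDI `DataVolume` with the four annotated fields; the resource names, URL, and storage size are illustrative assumptions:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv
spec:
  source:
    http:                            # <1> source type: http, or registry for a container disk
      url: "https://mirror.example.com/fedora.qcow2"  # <2> location of the image
      secretRef: endpoint-secret     # <3> optional credentials for the endpoint
      certConfigMap: tls-certs       # <4> CA bundle for self-signed endpoints
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
----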
-Tasks in this section let you create MachineConfig objects to modify files, systemd unit files, and other operating system features running on {product-title} nodes. For more ideas on working with MachineConfigs, see
+Tasks in this section let you create MachineConfig objects to modify files, systemd unit files, and other operating system features running on {product-title} nodes. For more ideas on working with MachineConfigs, see
 content related to link:https://access.redhat.com/solutions/5307301[changing MTU network settings], link:https://access.redhat.com/solutions/5096731[adding] or
-link:https://access.redhat.com/solutions/4510281[updating] SSH authorized keys, , link:https://access.redhat.com/solutions/4518671[replacing DNS nameservers], link:https://access.redhat.com/verify-images-ocp4[verifying image signatures], link:https://access.redhat.com/solutions/4727321[enabling SCTP], and link:https://access.redhat.com/solutions/5170251[configuring iSCSI initiatornames] for {product-title}.
+link:https://access.redhat.com/solutions/4510281[updating] SSH authorized keys, link:https://access.redhat.com/solutions/4518671[replacing DNS nameservers], link:https://access.redhat.com/verify-images-ocp4[verifying image signatures], link:https://access.redhat.com/solutions/4727321[enabling SCTP], and link:https://access.redhat.com/solutions/5170251[configuring iSCSI initiatornames] for {product-title}.
-MachineConfigs
+MachineConfigs
 {product-title} version 4.6 supports
-link:https://github.com/coreos/ignition/blob/master/docs/configuration-v3_1.md[Ignition specification version 3.1].
+link:https://github.com/coreos/ignition/blob/master/docs/configuration-v3_1.md[Ignition specification version 3.1].
 All new MachineConfigs you create going forward should be based on
 Ignition specification version 3.1. If you are upgrading your {product-title} cluster,
 any existing Ignition specification version 2.x MachineConfigs will be translated automatically to
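A minimal sketch of a MachineConfig pinned to Ignition specification 3.1; the role label, object name, file path, and contents are illustrative assumptions:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-example-file
spec:
  config:
    ignition:
      version: 3.1.0               # Ignition specification version 3.1
    storage:
      files:
        - path: /etc/example.conf
          mode: 0644
          contents:
            source: data:,example-setting%3Dtrue  # inline file contents as a data URI
----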